I'm having a hard time reconciling all this right now. On the one hand, from the outside, I can see the actions that Facebook takes and they seem awfully guilty of what they are accused of. But on the other hand, I personally know and have previously worked with some of the people who work on trust and safety, specifically for kids. Good people who have kids of their own and who care about protecting people, especially children.
The best I can come up with is that Facebook is so big that the "evil" is an emergent property of all the different things that are happening. It's so big no one can comprehend the big picture of it all, so while the individuals involved have good intentions with what they are working on, the sum total of all employees' intentions ends up broken.
So maybe Zuck is telling the truth here, that they are trying to fix all this. But no one can see the forest for the trees.
The person who mentions the banality of evil, dannykwells, has an excellent point.
But there's more at play here. I briefly worked on Twitter's anti-abuse engineering. Many of the people on that team cared a lot about protecting people. I sure did. But we didn't have the necessary power to actually solve the problem.
The people who did have that power were senior execs. They might say that they cared. In their heart of hearts, perhaps they even did. But their behavior demonstrated that they cared about other things much more.
My boss's boss, for example, was an engineering leader who had a climber's resume: quickly advancing through positions of more and more power. In my view, he cared about that a great deal, and did not give a shit about the actual harm to users. As soon as he got the chance, he pushed out my boss, laid off the team's managers, me included, and scattered the people to the wind.
I presume the same was true about the senior execs. They were aware Twitter was causing harm to people. If they wanted to know the details, we had plenty of research and they could have ordered more. Did they care? Impossible to know. But what they focused on was growth and revenue. Abuse was a big deal internally only as long as it was a big deal in the press.
I think this hits the nail on the head. It's not that Facebook or the many people who work there don't care about kids or a deleterious political climate. They do care. It's just about what happens when those concerns conflict with other concerns, such as maximizing user engagement. In my opinion Haugen's testimony and Zuckerberg's response simply confirm this: Haugen talks a lot about the research that was done and how that research was ignored; Zuckerberg points out a lot of (somewhat lacking in context) facts about the size of Facebook's investments in trust and integrity or openness to regulation.
Fair to say it's about incentives? I think it's possible to run a profitable social media company that does care. We're just stuck with too few options. I'm unaware of any that make transparency a core value by, say, openly publishing moderator actions.
> I think it's possible to run a profitable social media company that does care.
That's probably true, but not at Facebook's scale. For employees to accept the argument "We could have increased profit but chose not to, in the interests of users" when everyone's bonuses and stock value are based on profit, you'd need to hire a lot of people who are happy to trade some amount of personal gain for user wellbeing. That would be difficult when you have tens of thousands of people.
Alternatively, maybe it'd be possible to run Facebook without giving the staff equity or bonuses. That would take a huge shift in the way tech remuneration works, though.
This comment treats employees (people) as if they were algorithms for maximizing personal revenue.
That is not necessarily true. If FB were a place where people could feel proud of the good it does, a lot of them would be OK taking a pay cut.
Sure, maybe the ones that care the most about compensation wouldn't join, but that might be ok.
I previously worked for a for-profit company that was donating a massive share of its profits to charity. It was something I was proud of, and a part of my evaluation of how much I liked working there.
It seems increasingly likely that if FB keeps ignoring these concerns they're going to get hit with some sort of government regulation. So I still think it's in their long-term self-interest to try and avoid that happening, even if we assume profit is really all they care about.
If they can do this, and get everyone to trust that they do it, their company will be beyond all organizations in human history. You could let them take care of your kids!
>I think it's possible to run a profitable social media company that does care.
No. Due to network effects, a successful company must be large, and a large company lives by different rules. FB, for example, is a trillion-dollar company. It is like the TeV energy level in physics: different behavior of matter. At those energy levels chemistry just doesn't exist, and matter itself changes as protons break apart. Ethics in big business is like chemistry in high-energy physics - it just doesn't exist at those levels of money. At those levels they can be affected only by a comparable level of money or power, like government power.
I can't see it working. Instagram and WhatsApp should be spun off, absolutely no question about it. But Instagram and Facebook still remain way too big.
And how do you break them up? By geographic area? People would be pissed because they have contacts with, or maybe want to watch vacation videos from people from different areas.
IMHO the only options would be:
* non-profit run by the UN or similar
* forced to open all data and APIs to make it a platform, with rules to make it an even playing field
In both cases with much less (why not zero?) "engagement" focus: just show the latest stuff chronologically from the things you've liked/subscribed to. Could either work? No idea, but it seems to me they'd have a better chance than a Facebook per country or region.
It's probably too late for any specific publicly traded company that already exists, but social media as a protocol, with forced interoperability if necessary, is the way to solve this. Having many regional phone carriers doesn't prevent people from communicating across regions.
Of course, it does cost more, so the question becomes who is willing and able to pay for a socially less harmful means of networking individuals without having to put them all on a single data hoovering ad platform? Whether it's direct charge to consumers or subsidized by government, the money still ultimately has to come from people.
Implement ActivityPub (an existing protocol recommended by the W3C), and offer their underlying social networking services as hosted and managed software for big orgs to operate on their own domain. With interoperability, they'll work across domains. Target customer is anyone with at least 100K followers (so gov, institutions, media, et cetera).
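To make that concrete, here is a minimal sketch of what one hosted actor could look like under ActivityPub. The domain and account name are hypothetical; the @context, inbox, and outbox fields are the ones the W3C spec actually requires of actors, and the rest is illustrative:

```python
import json

# Minimal ActivityPub actor document (https://www.w3.org/TR/activitypub/).
# "social.example.org" and "newsroom" are hypothetical; inbox and outbox
# are the endpoints the spec requires every actor to expose. Followers on
# other ActivityPub servers interoperate by POSTing activities to the
# inbox URL, regardless of which company hosts which domain.
actor = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Organization",
    "id": "https://social.example.org/actors/newsroom",
    "preferredUsername": "newsroom",
    "inbox": "https://social.example.org/actors/newsroom/inbox",
    "outbox": "https://social.example.org/actors/newsroom/outbox",
    "followers": "https://social.example.org/actors/newsroom/followers",
}

print(json.dumps(actor, indent=2))
```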
And the US would ban any discussion on Guantanamo and CIA torture. What's your point?
It should be run independently, like the WHO, not directly under the power of the security council. And before you say Taiwan, the Republic of China is not a UN member, and there's nothing the UN or WHO can do about it, it's between China and them.
I am unfamiliar with cases where the US prevented CNN (or others) from discussing Guantanamo.
I am familiar with the extreme lengths China would go through to prevent discussion on topics it hates though (e.g. if HN was based in China both of our comments would have been deleted immediately)
And who is going to break up TikTok or whatever comes next/instead?
Social media is like tobacco, and FB increasing engagement at all costs is like Big Tobacco increasing the addictiveness of cigarettes. As with tobacco, the way to deal with the issue is to wean the population off it, in particular by educating people about the damage it does to them.
Btw, "Statement from Mark Zuckerberg" - that reminded that foundational tenet of Facebook :
Zuck: yea so if you ever need info about anyone at harvard
Zuck: just ask
Zuck: i have over 4000 emails, pictures, addresses, sns
Also, he was right. People shouldn't have sent Zuck their private info even if Zuck were a saint, because there was no way they could have known Zuckerberg was a saint.
How many stupid things you said at 19 made you $100B? I'm pretty sure that if a stupid thing you said at 19 had made you $100B by the age of 30 and continued to make many billions after that, you'd pretty much continue to believe and follow that stupid thing.
I honestly don't know if it's possible. Common sense seems to suggest that social networks become more harmful as they become bigger and more profitable. That's just my surface-level take though. I guess the meta-question is what to do about highly profitable lines of business that cause demonstrable social harm. Clearly regulation and government intervention are one answer, as are reforms to corporate governance structures (e.g. German and Nordic rules around worker representation on corporate boards).
While I'm sympathetic to your anti-FB bias, I think it's pretty clear that by any reasonable definition Facebook is indeed a social network (which also happens to make gobs of money through advertising, and covets engagement because that increases ad revenue as well as a general sense of "platform health"). The question posed was whether "being bad for society" is an emergent property of social networks as they grow.
It’s quite hard to see how: it’s just a business that doesn’t give one shit about you, no matter how many followers you have or how many likes you got, while you’re still using it.
An actual social network would care how you feel today.
A social network is just a kind of business, like a chain of supermarkets or a steel mill. You could, of course, have a non-profit social network (or a business with a different corporate structure - as I allude to in a parent post), but that's not what Facebook is or what its peers are. In the United States, businesses are beholden first and foremost to their shareholders, which means that there's an inherent tension when "giving a shit about you" and "making money" come into conflict. What makes you think that a social network with a normal corporate structure and in the absence of countervailing regulation would have more empathy than any other business?
As an aside, I'd like to make the broader point that boiling everything down to "Facebook=evil" doesn't seem like a productive way to get the changes that I think both of us would like to see. It walks right into the strawman arguments that Zuckerberg is responding to in his post ("Why would we invest so much in trust and research if we didn't care?"). And it doesn't capture the fact that Facebook is a huge entity composed of a lot of people, many of whom have different incentives and some of whom are even trying to do the right thing (see: Frances Haugen).
You’re trying to appeal to “my better nature”, to help me believe that FB can be changed. It wont work.
In 2007 or 2008 I promoted the idea amongst my friends to “poison the well”, to feed bad data into FB’s algorithms. If they enjoy Coke, talk about Pepsi, I said. If they vote Green, share far right news. I called it Falsebook.
It didn’t work, because the problems inherent in the advertising auctions are not very visible to users. They mesh with the echo chambers of our friend circles and fizzle into the background. We seek out echo chambers in order to feel safe and validated. It’s mostly fine when it’s just humans relaxing with friends. But when that echo is robotically generated from an accurate model of all individuals involved, it can be leveraged for all sorts of big scale shady crap. Advertising is the village idiot of this town and corporate-backed political propaganda is the warring gang lord.
And Zuck does what any market owner does: sits back and rakes in profit, cleaning up the mess when it suits him. And mostly it doesn’t.
I’ve never felt it could be fixed from the inside. I’ve never thought it was a good idea to begin with but I let peer pressure and my magpie nature suck me in. I regret signing up for FB and GMail back in the day because corporate surveillance has fucked our society hard and let sociopaths run rampant.
You will not change my mind. I wasn’t contributing to the conversation in good faith. I shall assume you were. My original glib comment about FB abusing social networks wasn’t an invitation to learn a new perspective, it was a war cry, a flag raise, a call for comrades. I was hoping someone would respond with links to a new Scuttlebutt implementation or tell me about a cool Masto instance. Or try to explain why I should bother with Matrix. Or something more interesting than all of those.
The web is sick. Personalised advertising has made it ill. We need to fix it and I don’t believe FB or GOOG are interested in trying.
An actual social network would let you choose how you feel, which is why you can choose your friends who might all be downers and feel horrible, and the social network will let them share their despair with you.
> Common sense seems to suggest that social networks become more harmful as they become bigger and more profitable.
Yeah I don't know that anyone could have predicted these networks would become too massive to displace. Intervention should be on the table, no doubt about it. I hope they bring up net neutrality and Facebook's violation of it via internet.org. I thought "Free Basics" had died but apparently it's still going.
> It's just about what happens when those concerns conflict with other concerns, such as maximizing user engagement.
They care about their own kids more than they care about the abstract, faceless millions of other kids on FB. Moral bankruptcy starts when they harm other kids in order to put food on the table for their own children.
I would not be surprised if some of the FB's employees ban their kids from using it.
I don't necessarily subscribe to the Gervais Principle[1] other than thinking it's an interesting lens through which to reexamine motives and motivations of coworkers, but sometimes the terminology is damn apt (at least for one group...).
Bingo. I never interacted with a person on the FB integrity teams who didn't care deeply about these problems - but their solutions never seemed to make it into production. Whether that was because of the unintentional friction of bureaucracy, or the explicit wishes of execs, is somewhat immaterial in the final analysis.
Meant to by whom? I was very serious about it. So was my boss and most of the people on my team. Beyond that, I don't have direct data.
My guess is that execs would have been very happy if we could have quickly solved the problem in a way where revenue and growth were not harmed and nobody important had to go out of their way.
But online abuse isn't like that. It's a hard problem. So I think execs were satisfied to say they were making a big effort, celebrate some modest gains, and then stop thinking about the problem once it wasn't a giant PR/regulatory issue for them.
So it's more like how a lot of people mean to get fit or lose weight. If it's New Year's Day or their doctor scares them enough, they'll get real serious for a while. They probably do mean it, but they mean a lot of other things too, and those win out.
I think you're exactly describing a "Potemkin village in case anyone accuses the execs of not caring." The people in power weren't serious about solving the problem (because the only things they'd accept were rearranging the deck chairs on the Titanic or an easy magic solution that could never actually exist), and the main benefit your team provided was PR/regulatory cover for the organization.
I would posit that no company "actually" cares. The premise of Twitter isn't "the social platform that has no abuse". If that were the main goal, I'd imagine they'd be another nothing startup that ran out of money and died years ago. So if the only companies that ever exist in the first place are ones that don't have social good as their primary goal, then judging companies for that seems not particularly useful.
> So if the only companies that ever exist in the first place are ones that don't have social good as their primary goal
Or alternatively, we could try to view this as the root problem and try to fix it.
Edit: Note there is also a difference between "not having social good as their primary goal" and working effectively against the social good, whether intentional or not.
No, a Potemkin village is never meant to be real. The parent commenter suggested it was like New Year's resolutions, which are meant to be real, but in the end people fail because they like their New York cheesecake too much to change.
Then the doctor tells them again that they need to go on a diet or they will have a heart attack, and they go on the diet for a couple of months.
having a heart attack may solve the issue.
Of course there are clear-eyed people who see the situation for what it is; those who stay on don't care and consider the efforts to fight abuse a Potemkin village.
Exactly. To me a Potemkin village is one or two steps further away from reality. The Potemkin village is unoccupied and has no potential for occupation. All involved in its construction know it's fake.
My team was sincere, worked hard, and definitely got some good stuff done. Just not nearly as much as we wanted.
Same thing then. In the metaphor, the buildings' only purpose and only outcome is to fool. One of our purposes was to get things done. And we definitely had some impact. If the public/regulatory pressure stayed constant, we would have gotten some more done. We would also have gotten more done if executives had taken it more seriously, of course.
So here's a much more general statement: your identity and your sense of self, your consciousness, your "choices" that build your life's narrative, are all actually a "Potemkin village in case anyone accuses the execs of not caring". The "anyone" in this scenario being other social agents you interact with (see [1] for further thoughts on this).
> the main benefit your team provided was PR/regulatory cover for the organization
This sort of reasoning seems to be applicable on many levels of social organization, from brains to countries. Most of the stuff your brain does is for show / self-delusion, most of the stuff any community or global organization does is also for show / self-delusion. It's "Potemkin villages" all the way down.
This is a place where a founder-CEO-demigod like Zuck should be able to make better decisions than a professional management team like Twitter's. The long-term profit maximization strategy was to maximize profits only up to the point where you risk getting regulated by the government. With all the fawning praise of him as a kid, I don’t think Zuck envisioned that one day both Democrats and Republicans would be united in their desire to fuck him over.
Zuck is desperate to be regulated: he asks for Congress to step in every time he’s asked.
Regulation would be awesome for Facebook: not only would it be a fig leaf for all their social problems (“hey, it’s not our problem anymore”), it would also stifle any potential competitors out of their market. The costs of regulation are regressive: much more easily absorbed by BigCos than any startup.
Could be. Some of the proposed regulation I've seen for this specifically exempts companies under, say, $100m/year in annual revenues. So legislators aren't unaware of the problem.
Another possibility is that Facebook knows that asking Congress to do something is either a) not going to increase the odds of them doing anything, b) actually going to decrease the odds by sounding contrite, or c) going to put it in a place where Facebook's army of lobbyists and otherwise connected individuals make sure nothing meaningful gets passed into law.
It's not a bad bet given how polarized Congress and the American electorate are. And gosh, who is a big enabler of that polarization?
The base problem isn’t really solvable, and it’s as much a public discussion about what we want to do with speech first, before it’s a question of how we want social media firms to act.
In the end, there is no algorithm which can match the scale of bad content, no robust definition of bad content which can work without creating a flood of false positives.
Every false positive is now someone who had something valid to say who is silenced.
How are we going to decide which grey-area speech is unwelcome (leaving out obvious things that are illegal)?
————
The popular idea is increased human-centric moderation, but that's still going to be 2,000 escalations per day for one region, at a 10% escalation ratio from a base of 20,000 reports.
It only appears unsolvable because you've presumed that social media should exist in its current form. Yes, algorithms can't match the scale of the bad content before we hit AGI. But that problem only exists because we have for-profit companies hosting way more content than they can afford to police on the thin margins ads provide. (Twitter, for example, makes about $1 per user per month.)
Prior to the late 2000s, this problem didn't exist. In alternate universes, it surely doesn't; there are many ways this could have gone.
Not OP. My impression is that a lot of the focus is around safety and specifically privacy. Both play right into social media giants' hands.
Instead we need to target ads. Almost all problems can eventually be traced back to ads, or at the very least to the money incentive that ads create, on any platform.
I would go one step further and suggest that ads are an issue for certain types of economic games or markets.
Any industry that depends on ads tends to consolidate, and has an issue of incentives: the more people on the network, the more likely the network is to survive.
On a tech forum people assume that challengers have better tech - but I would argue that challengers actually allow for more salacious/engaging content.
This is what creates the race to the bottom.
If the race to the bottom can be stopped - i.e. an incentive structure created that stops engagement from being the primary metric - then the rest of the downstream problems are largely prevented.
That's my root-cause assessment of the situation. However, once I get to this point, any solution seems to be a mess of intersecting fields ranging from morality to legal constraints to issues with press freedoms, free speech, etc.
So... I guess how do we set up incentives to not allow the most "engaging" content to dominate?
d) He realizes that there are opposing factions with different ideas of what needs to happen, and it's impossible for his company to please them all, so pushing the decision to some semblance of a vote that claims to represent everyone is the only way to put an end to the endless arguing.
or
e) He believes what he wrote and doesn't think these social issues should be decided by corporations.
Either interpretation is fine and a lot more generous than yours.
He also wants the legislation to be toothless or misplaced so Facebook can pay lip service and not be fundamentally altered. Haugen just offered congress more surgical solutions than they were drafting as well as valid critiques of their drafts. That's why this particular post from Zuckerberg comes with such a large side of koolaid.
That's an extremely pessimistic view of regulation. Somehow other industries manage to do fine despite it. In the EU, GDPR somehow hasn't snuffed out all small businesses.
I mean, the harmful effects of regulation are much less visible than the harmful effects of what is being regulated. We don't know to what degree the GDPR has snuffed out potential small businesses.
"The long term profit maximization strategy was to maximize profits only up to the point where you risk getting regulated by government." That is correct, but as with Wall ST, the addiction to the gamble has long been unhinged by an environment that has been lax on regulatory options...so when the blowback actually hits, it will not be just painful it will likely come on the tail end of a ragnarok-grade event that will likely shank the whole industry for a generation because current C-suites are so far up their own asses they can take direct inventory of the last few days of meal plans.
Zuck started out with Facemash, a site where students judged the attractiveness of other students. It was a creepy site then, and Facebook has remained creepy to this day. Sorry, but companies do take after their founders, because the founders set an example of what is OK, how to behave, what their values are, etc. While he may not be responsible for all of the actions of his employees, the buck stops with him.
Are you saying that people on teamblind are only focusing on the TC aspect, and also care about many other things, or that only a subset of people only care about TC? If the latter, it’s still worrisome that so many of those people who hop around chasing only TC and not caring about subject matter, are highly valued at these companies like Facebook. Glib and shallow E6 and E7 don’t make for great tech leadership in a company that supposedly cares about ethics and safety.
Seems like very few engineers that make it to E6+ are the ones rabbling on about "TC or GTFO" on Blind. It's mostly more junior engineers or bad engineers that will never make it past the 2nd or 3rd rung of the career ladder.
I get your point, but glib and shallow climbers are exactly what you want for a company that only supposedly cares. They're very good at supposedly caring!
> I presume the same was true about the senior execs. They were aware Twitter was causing harm to people. If they wanted to know the details, we had plenty of research and they could have ordered more. Did they care? Impossible to know. But what they focused on was growth and revenue. Abuse was a big deal internally only as long as it was a big deal in the press.
Could this just be an issue of too many problems to care about and not enough time to solve them all or do you think the indifference was intentional?
I worked at FB (not for very long), but you can trace everything back to the awful performance process they have (the infamous PSC). At the end of the day, hard-to-measure stuff doesn't get you promoted, while tangibly moving metrics does. If you incentivize people that way, it doesn't take anyone WILLINGLY doing anything evil to end up with a pretty evil thing on your hands.
If you optimize for profits and only profits, you always end up selling crack, because it's the best business in the world; that's why it's illegal.
I agree their incentive structure is at the root of this. But this is an incentive structure designed by one group of conscious actors and then followed by another group. A bunch of people choose this. And given the many years of public critique of Facebook, they can hardly be unaware of what they're choosing.
The truth is that almost anybody could sell crack. Most of us choose not to.
Too many problems to care about and not enough time? That's the human condition. What defines us is the choices we make, the priorities they set.
I can't know what they felt when they made those choices. But I can see the choices and the outcomes. I get there's some theoretical difference between willfully fucking people over to get rich and being so blinded by eagerness to get rich that you fuck people over as a side effect. But either way they worked very hard to get positions of power that affected millions and then were indifferent to the harm they caused, so it's not like this happened by accident.
Abuse is measurable in all sorts of ways. The most clear one is having experts take a look at a random sample of users and see if they're being abused. You can back that up with interviews to look for both their take on what's happening and a variety of trauma markers. And there are all sorts of other measures that correlate.
But if there somehow weren't ways to measure it? Then they would have created a product where they couldn't even tell that they were harming people. That right there is something that shouldn't exist.
Yep, ask yourself how much identifiable return "preventing abuse" has, and then you have your answer for exactly how much these companies actually care about it.
Even worse, preventing abuse and other social media ills often lessens engagement, and you know what that means.
>Yep, ask yourself how much identifiable return "preventing abuse" has, and then you have your answer for exactly how much these companies actually care about it.
There is a phenomenon I have witnessed working both in high-growth startups and traditional Fortune 500s. At some point, the company starts attracting Dark Triad personality types who cement themselves in upper management positions, usually starting at Director level. These people are extremely dangerous. One of them had access to my corporate laptop (as was standard policy for that company) and would torment me by screwing with me on a daily basis.
When an organization becomes too large or bureaucratic, these Dark Triad types hide and typically exert their influence, power, and will behind the scenes. This is why these companies seem “evil”, but it’s usually not the founders’ fault; a lot of the time they’re unaware of it, or one of the founders is also a sociopath and will protect the evil cabal. That’s my two cents about it, anyway.
Extremely insightful; I have had the same experience and I agree. I somehow have the ability to "sniff out" these types pretty quickly; something about their conversations gives them away.
Here's a real-life example: I worked for a small startup years ago, and the founder invited the engineering team to lunch at a restaurant. The founder proceeded to berate the waiter for no reason and yell at him. It seemed like completely psychopathic behavior, and then I caught the smallest of smiles from him after the waiter walked away beaten up (metaphorically speaking) by the barrage. I knew right then this was not someone I wanted to work for, and I put in my notice shortly after.
This happened at a company I was at. Hyper-growth startup, huge aura around it.
A few high-level folks arrived whose perfection in smooth talking was rivaled only by their enjoyment of wreaking havoc on teams & relationships. They brought in their friends, paranoia and rumors spread, culture went off a cliff, CEO was confused what happened. Mass exodus followed.
You don't just filter when you hire; you also unfilter employee concerns. Most people think that CEOs, founders, owners, "management" are not going to back them, and they're generally right. Therefore most problems never skip a level in the chain, and end up being suppressed by the reporter's own supervisor.
Assess for technical skill and raw tactical ability. This would have the downside of filtering out genuinely good leaders who've been too far removed from technology (and thus are pure people leaders) for too long. People who were once technologists, but no longer are, have a way of speaking that's easy to pick out. Also, some people move up but keep their technical chops along the way. An executive who can microscope in on parts of the org, as needed, with a technical mindset, can provide material value.
I do it via interviews oriented on doing the actual work. The more an interview tests the ability to talk about work and be charming and persuasive, the more it advantages awful people.
For software development, that's the opposite of what I want. Some of the best people I've worked with were terrible at interviewing. But once we got into actual code, they settled down and their skills shined through.
How I find them is in their speech. They come across as extremely insincere and "out" themselves by how they talk, either to the interviewer (focusing a lot on themselves) or to someone they believe is beneath them (dismissive). If you are a gatekeeper for their employment or something they need, expect flattery and overly kind words. If you are no longer that gatekeeper, expect to never hear from them again, or abuse. Also, this type tends to lie a lot; that's usually how they do get canned.
"Just call BS! Geez, that was easy. Why are all these HR people so incompetent?"
This assumes you are more intelligent than them, and can see through their insincerity during an interview process.
But have you considered that these "smooth talkers" can be smarter than you? Or at least have been perfecting their BS craft (while you perfected yours, such as programming), so that you are absolutely no match?
For sure. Standard lines of BS don't work well on me because they're optimized for other people. But I firmly believe that for every person, there's a line of bullshit they're vulnerable to. And the people most vulnerable are the ones most sure they're too sharp to be BSed.
You're absolutely 100% correct. About 2.5% of people have an inverted viewpoint of their own survival. Meaning, they think they need to squash others down in order to feel superior, as opposed to the logical path, which is to raise your team together and effectively lead to greater prosperity for all.
I think the bug in most organizations is that performance is reviewed only by the people above with very little to no input from below.
edit: Took me down memory lane to re-visit some truly incompetent and insanely unproductive bootlickers and fast talkers who would lose their jobs immediately after a 5-minute confidential talk with any of their underlings.
For sure. A friend at Apple says that it was much better to work at when it was not as successful. Now that they have large piles of money, it attracts people who seek proximity to large piles of money.
Reminds me of Mao; he'd always say whatever it took to gain power. But when he had it, he used it solely for personal gain or personal goals, regardless of how many people got hurt or killed as a result.
Mao Zedong was a "founder" of a militant "startup" called the Communist Party of China during a vast civil war; his experience has virtually nothing in common with a non-founder career climber in 21st century SV.
Totally agree with you; most of the time founders are not aware this is happening down the ladder. I have seen this scenario, with the middle layer of management protecting their jobs and gatekeeping, in most companies today.
Either the founder isn't aware, and they just get engulfed in the new culture until, at some point, they have no power left. Or they are among the worst of the pack and drive the new culture. Given that Zuck somehow managed to retain 50+% of voting rights until now, I would put him in the second group. Jeff would be the same.
And then you have the truly exceptional people, who manage to combine the ruthless drive needed to grow a successful company with caring about their people, and who have the capability to keep those bad actors out, or at least in check. I saw maybe 1.5 of these people in management functions, middle management that is. Not sure those black-swan unicorns exist as founders, though.
> they might say that they cared. In their heart of hearts, perhaps they even did. But their behavior demonstrated that they cared about other things much more.
Isn't it just about that in the end? I think being good or not is about whether you give yourself the room to do the right thing even when other pressures exist – because they always exist.
Being good can be hard, because sometimes it means you have to abandon your usual priorities and stand up to the consequences which will emerge from that decision.
I completely agree with you. The real test of whether someone is doing something good is when they do it despite the consequences. Otherwise it's meaningless; they simply are not being evil.
Just curious - given your boss's boss is so self-interested, what advantage could he gain from pushing out a subordinate and laying off all the people below?
Whenever I am having a hard time understanding a situation, someone's motives etc in the world of business/politics, I start with follow the money, and it helps. It might sound cliche, but it is also true in a majority of the cases
This is exactly the point Frances Haugen is making, and it's why this is so different and so much more significant than the other Facebook scandals and leaks in the past.
Haugen repeated over and over again in her testimony today that Facebook is full of smart, thoughtful, kind, well-intentioned people, and that she has great empathy for them, even Mark Zuckerberg. Her point is that they have created a system of incentives that are inexorably leading to harmful outcomes. It is not about good and evil people; it is about the incentives. It's exactly as you are saying.
That's why she is not advocating to punish Facebook for being evil, but rather to force Facebook to reveal and permit research so we can understand the system and fix it, because Facebook is too deeply trapped in its own tangle of incentives to fix itself. In this I think she is absolutely correct.
"Facebook has created a system of incentives that are inexorably leading to harmful outcomes" Exactly right. The solution baffles me. "Force Facebook to reveal and permit research so we understand the system and fix it" Basically keep the harmful system in place, but pass the reins to an unspecified cabal hiding under the innocuous word "we". Hard pass.
The "lean in" people and incentives have made society suffer for profit. Perhaps we can define a better set of incentives that reward companies of people building products.
We can. It's called pay directly for the services you use. It is a time-honored system where you give providers money in exchange for goods and services. In response, their incentive is to keep you happy and healthy and prosperous so you can continue to give them money.
No, their incentive is to get your money and get other people like you in case you die on 'em. They don't need you specifically, and they extra much don't need you to be prosperous.
The message 'you can save all this money, using us!' always means 'you can spend all this money with us'. I'm not faulting the general system or even your point here: I am, however, suggesting that while the system is fine it does NOT in any way imply that such people have or feel ANY incentive to your well-being.
You could maybe make a case that such a company might feel an incentive to the POPULATION it depends on… but even then, I feel like that might be mythical. In theory you don't want to eat your own seed corn, but such incentives toward good behavior are so easily ignored… and even if they are honored, it's a collective concern, NOT personal.
They don't care about you, and you are damn lucky if they care even a little about your wellbeing as a class or demographic… most likely they do not. And that's where the system tends to break down.
> In response, their incentive is to keep you happy and healthy and prosperous so you can continue to give them money.
Their incentive is to find a way to get your money; we can see in the world around us that many of them have no problem if you're insecure, addicted, and indebted.
Which will never happen. It takes customer impetus, and that's not there. People don't understand the cost of the free products they use, so they are unlikely to switch.
So what would it take for your outcome to actually happen? Because I think it would be the right way to run software platforms as well; I just don't see a pathway there that isn't heavy-handed.
I would be for regulating the advertising industry, since I feel it is the root of all this. None of the unethical software magnates would exist if not for the advertising dollars pouring through the door thanks to the ad-tech apparatuses they have built, and the poor incentives that creates. But that regulation is challenging and unlikely too.
I think a freemium model would be better. You should have to pay for having a large number of followers/friends past a certain point.
For example, maybe an account with 1,000 friends is free. Up to 10,000: $5/month; up to 100,000: $50/month; and so on.
If you're Kim Kardashian with 250 million followers and you're making millions of dollars hawking skin cream or whatever, you can afford to pay a few thousand dollars a month to reach your large, valuable audience.
This way, the content creators can sell ads if they want. The platform doesn't sell ads. Users only see ads if they follow a creator who has sponsors. It's up to that creator to make their content worthwhile enough for people to choose to follow them in spite of the ads.
A platform should be like a company that sells TV broadcast towers. They give people a way to reach an audience. What that content creator does with their audience is up to them. Maybe they could charge a subscription. Maybe they get sponsors. If it's a large non-profit or government organization, maybe they pay at a lower rate or get to use it for free.
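A minimal sketch of that fee schedule, using the tiers proposed above. The 10x-per-tier extrapolation and the cap (so that a 250M-follower account pays "a few thousand dollars a month", as suggested above) are my guesses at the "and so on":

```python
def monthly_fee_usd(followers: int) -> int:
    """Hypothetical tiered pricing: free up to 1,000 followers,
    then $5/month, with the fee growing 10x per 10x of audience."""
    if followers <= 1_000:
        return 0
    fee, cap = 5, 10_000
    while followers > cap:  # each 10x of audience multiplies the fee by 10
        fee, cap = fee * 10, cap * 10
    # Cap at a few thousand dollars/month to match the celebrity-scale
    # figure suggested above (the exact cap level is an assumption).
    return min(fee, 5_000)

assert monthly_fee_usd(800) == 0            # free tier
assert monthly_fee_usd(5_000) == 5          # up to 10,000 followers
assert monthly_fee_usd(60_000) == 50        # up to 100,000 followers
assert monthly_fee_usd(250_000_000) == 5_000  # capped at celebrity scale
```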
Granularity matters. Social-feed granularity is small enough that an algorithm, even a primitive one, can sketch an arbitrary narrative on the spot by juxtaposing unrelated items, akin to a ransom note built from letters cut out of different publications. Pandora and especially GoodReads have large granularity, making them difficult to employ in the same manner.
Very different outcomes I'd say. Are friends and family getting torn apart on those platforms? Do they need armies of moderators to remove abuse material or fact-check posts? (I'm sure there's some, but not on the same scale as Facebook.) This is the first I've ever heard such a thing suggested, and certainly haven't observed it personally.
It may be that facebook can't fix itself, but what makes anyone think an even larger and more powerful organization is the answer and won't itself succumb to its own system of incentives? She is pushing for the equivalent of The Ministry of Truth.
Remember, this is the system of incentives that had us spend 20 disastrous years in Afghanistan, across both parties. And has failed to deal with climate change. And healthcare. And education. And wealth inequality. And housing. And... Siri, what's the definition of insane?
By the way let's give a name to that system, it's called "PSC". Google it. It's the most absurd and ineffective performance management system I've ever witnessed.
It creates a Hunger Games mentality within teams and makes doing anything that actually matters virtually impossible, generating an infinite sequence of half-assed six-month projects that get systematically abandoned as soon as the people responsible manage to get promoted or switch teams.
> It creates a Hunger Games mentality within teams and makes doing anything that actually matters virtually impossible, generating an infinite sequence of half-assed six-month projects that get systematically abandoned as soon as the people responsible manage to get promoted or switch teams.
That's a bit of an overdramatization. PSC is just peer feedback, and is very similar to perf reviews at Google as well as other large SV tech companies. Having done both, I didn't experience this "Hunger Games mentality" you describe.
> they have created a system of incentives that are inexorably leading to harmful outcomes
If the people inside were "smart, thoughtful, kind, well-intentioned people", they would have tried to work around the incentives, influence them, denounce them, or quit.
That rarely happened. Most of the time, they just take the money and go with the flow.
A long time ago, I worked in a startup full of smart, thoughtful, kind, well-intentioned people. Of course, there was also a CEO who was a ruthless manipulator and managed to make everyone believe that they were working for the greater good. In truth, in the course of lining his pockets, he was in the process of destroying several employees and former employees.
Getting past the illusion was hard.
Taking a stand against said CEO while nobody else was aware of the problem? Really, really hard.
Now, instead of a few dozen employees, Facebook has tens of thousands. I assume that all of them are subject to permanent propaganda, as in many tech companies, and that the semi-official word is that they are being misunderstood by the rest of the world, because of course they are doing the right thing but the problem is harder than people think (well, that last part is true, at least). I suspect that it's even harder to go against the flow.
How is giving access to user data for "research" better than that whole data-privacy scandal with Cambridge Analytica?
These days research comes with a set of politically charged assumptions; for example, the definitions of "hate speech" and "misinformation" differ based on which political camp you ask.
So giving access to Cambridge Analytica is bad, but giving it to some other partisan "think tank" is fine? Who would make those decisions?
The government has systems in place for doing this. For example, the SBA has tons of data about small businesses across the country. You can get access to it... IF you are part of a research institution and go directly to their dedicated research facility so that you can't exfiltrate the data. Such a model is open, just not free. It would probably be the right model for this issue.
I don't think it's an emergent property, I think it's a by-product of the constraints. It's all well and good that they want to make Facebook safe and healthy, and I honestly believe plenty of people working there are trying to do just that. However, they are operating under the constraint that they cannot move backwards on profits, and therefore engagement.
Imagine if you were trying to fix climate change, but under the condition that you weren't allowed to burn fewer fossil fuels. You may try very hard, and very sincerely, but it's a fool's errand.
> I don't think it's an emergent property, I think it's a by-product of the constraints.
> Imagine if you were trying to fix climate change, but under the condition that you weren't allowed to burn fewer fossil fuels.
There is one person who controls all the constraints: Zuckerberg. He even went so far as to enforce that through his stock classifications. It’s entirely understandable and acceptable to have empathy for those working at FB who are attempting to solve the problems. But Zuckerberg made the decision to be the single source of the constraints that bind everyone below. And his constraints are: profit over all else. He should face consequences for setting those constraints, just as anyone should who sets a constraint of “address climate change without adversely affecting GDP”.
Separately, and as the “revelations” of Zuckerberg’s immoral behavior continues year after year, those who work for him but are attempting to solve the problems, should recognize at some point in the future, now, or in the past that the problems are insurmountable within the confines of the constraints. As that knowledge spreads, then the question becomes whether those idealistically earnest individuals are justifiably ignorant of the reality: that all their best intentions are moot in the face of the constraints as were determined by Zuckerberg. And when or if they are no longer justifiably ignorant, they become culpable.
Zuckerberg is simply in over his head, and I think he knows it (I certainly wouldn't want to be in his shoes). I don't think he's evil; I think he was enamored of this toy he built, he pushed it in very logical "business" directions, and now it's been adopted by so many people and is so big that its business model is having real-world impact, where I'm sure he'd prefer, from an intellectual perspective, that it acted totally passively. He's right that no business should have to determine the morals of a society, which is essentially what we are asking of Facebook. The bigger picture is more complex than most people realize.
I think you are being overly generous to him. He has extremely powerful tools at hand, and he properly owns them and has absolute power over them.
But due to whatever reasons (ego, the greed of seeing his net worth rising and fear of losing some of it, etc.) he won't take a morally right step that would harm FB's financials in any way.
On top of that, let's be clear: the mission of FB was never some altruistic connecting of the world. On the contrary, it was all that juicy private data on each of us while we are connecting and interacting, quietly building a shadow profile for every single human being. There is no moral high ground there, no matter how much mental gymnastics you try. If FB somehow leaked that data publicly, the company would go bust very quickly.
In more than one way, I struggle to understand these whistleblowers. They get hired for tons of money into a company with a clearly amoral (or at very best dubious) mission and then are surprised when it actually is... A similar case would be going into investment or private banking and then being surprised at how the business is set up and how the decision makers in it behave.
Nobody is forcing him to keep doing this. He’s waking up every day and making the choice to keep running FB today the same way he ran it yesterday. He could just quit
This is a rebuttal to the “he’s in over his head” argument. If he personally is in over his head, the obvious solution is to quit and let someone more capable run the company.
I personally do not buy the “in over his head” argument, fwiw.
> Imagine if you were trying to fix climate change, but under the condition that you weren't allowed to burn fewer fossil fuels. You may try very hard, and very sincerely, but it's a fool's errand.
This also happens to be China's literal policy towards climate change. They announced a pause on funding external coal plants, and doubled down on their internal ones.
A rule more analogous to Facebook's presumed position would be, "you can fix climate change, but you can't do anything that would reduce GDP per capita". Which in practice means that while some useful tools would be on the table, others would definitely not be.
Not really, since Facebook revenue is much more directly tied to engagement than GDP is to fossil fuel consumption.
To extend the metaphor, Facebook's "alternative energy" is non-advertising-based revenue. I see zero effort from Facebook to move away from ad-based revenue, so there is zero chance that Facebook is going to make meaningful progress in changing.
Agreed. Any moderately complex system can have emergent behavior. This is a fundamental feature of complexity. You can sometimes take advantage of it, by finding unexpectedly profitable features that only exist at scale or in conditions you happened into.
When that emergent behavior both increases profits and takes advantage of your customers in negative ways, the relationship moves from something symbiotic to something parasitic. This is where it begins to cross the line and people start throwing "evil" around when describing you.
I don't think most companies are out to create a worse world, but many do it until they are forced to reset.
I think a more apt analogy has advertising as Facebook's fossil-fuel burning, but then I expect severely curtailing fossil-fuel use will severely reduce GDP, which I am guessing is not a common belief around here. (I'm guessing that many around here think that it is essentially just the stubbornness of those in power that keeps fossil-fuel use high, and that even if we force the whole world economy to transition to 100% renewables over, e.g., the next 5 years, things will turn out fine.)
The hypothetical "most people" in this statement would be dreadfully, dreadfully wrong if they think it would all turn out fine. They are vastly underestimating how much the modern world, in literally almost any aspect you can imagine, is reliant on fossil fuels.
Although if this plan were viable, one can imagine the Zoloft marketing department would have already purchased the ad space. The limiting factor is probably the need for a psychiatrist to approve a prescription for every patient. Most users in developed countries are already eligible to get the drug itself for free, or at least cheaply.
> So maybe Zuck is telling the truth here, that they are trying to fix all this. But no one can see the forest for the trees.
Ah, this is what I think of as Schrodinger's Accountability. Zuckerberg and Facebook's senior execs are simultaneously: A) so brilliant for running Facebook that they deserve to be incredibly rich, and B) so normal that they can't possibly be expected to understand the consequences of their actions, and so are morally blameless. Heads they win, tails we lose.
I say it's one or the other. If Facebook is too big to be understood, it should be broken up into small enough units that mere mortals can see the forest and tend it responsibly. And if not, the execs should be morally and legally culpable for the harm it does.
You may be missing the point if you think your point is orthogonal to theirs. Mark Zuckerberg doesn't have to be painted as a reptiloid for his actions to be bad, or for those actions to cause harm. More than blaming, shit needs to get fixed, right? We can still hold people culpable, but we don't need to; we don't need to indict anyone before trying to fix a problem that is self-perpetuating due to individual incentives and a complete lack of oversight.
The senior execs are the ones who set up the incentive systems there. They are the ones who are richly paid to provide oversight. So either this is exactly what they want or they're hopelessly incompetent.
And yes, I think we should change the incentives. We should change them such that executives face direct personal punishment for negligent or intentional harm. We should have learned that lesson during the 2008 financial crisis, but instead nobody did time. The worst that happened was that some very rich people were forced to give back a modest percentage of their gains.
>>We should change them such that executives face direct personal punishment for negligent or intentional harm. We should have learned that lesson during the 2008 financial crisis, but instead nobody did time
does not contradict anything else.
But this
>>So either this is exactly what they want or they're hopelessly incompetent.
is incorrect. Those are not the only two options. Reality is typically more complex, and more boring than that. Doing harm does not require willful malice or profound stupidity and trying to reduce it down to that does no one any good.
Mind you, after investigation we might find that it really was one of those two cases for many people! But "negligent harm", which is something you want people to be held accountable for (me too!), does not require gross incompetence. It can be as simple as ignoring a couple of inconvenient truths and being insulated from the consequences of one's decisions, or the cumulative outcome caused by the group.
I am saying that we end "being insulated from consequences" by choosing to reduce it down to those two options.
Think of it similar to handling explosives. Are they useful and important in society? Definitely. Are they subtle and complicated, such that working with them can easily harm somebody in ways that are not foreseeable to the naive? You bet.
But when somebody decides to create and apply explosives and hurts somebody, we don't just say, "Gosh, that's very complicated. Who could have known how it would work out?" We say, "You intentionally chose to work with something powerful and dangerous, so you're responsible for the harm you caused."
Is it more complicated? Sure. And I'm saying that when it comes to highly paid executives who seek out positions that put them in control of dangerous complexity, they become responsible for the outcomes.
They are already seen as responsible for anything good that happens on their watch, which is why they get paid such vast sums. I'm saying they should be seen as equally responsible when it comes to the harms. No more of this "Oops, we crashed the economy/poisoned a bunch of people/actively enabled genocide" stuff. All of that "more complex" reality becomes their problem if they are in control of it.
What does breaking up a company actually do for the consumer? I don't think telecoms are any better for the consumer decades after we broke up Bell. There is strong incentive to just form a cartel, like today's telecoms, rather than a competitive environment that benefits the consumer.
The breakup was a huge win for consumers. Long-distance rates dropped significantly due to competition and the telephone system became much more open.
And I think it dramatically aided early internet adoption. If you read "Where Wizards Stay Up Late" you'll see how big a barrier AT&T was to the adoption of packet-switched networks, rather than the circuit-switched networks they sold people. How far would the Internet have gotten if AT&T had banned home modems [1] or priced early ISPs out of existence? They would have vastly preferred something like AOL, not the Internet, which destroyed their long-distance call business entirely. Look at how they behaved with mobile apps until Apple launched the App Store.
Our problem was that we didn't stick with it. Starting in the Reagan era, antitrust enforcement shifted toward much laxer standards. So AT&T reassembled itself as a (smaller) juggernaut and kept going.
[1] I realize this sounds insane now, but one of the things the DoJ sued for is "Obstructing the interconnection of customer provided terminal equipment and refusing to sell terminal equipment, such as telephones, automatic answering devices or switchboards, to subscribers".
It's hard to say how things would have gone differently, but I don't think it would have been much different. Looking at the ISP world today long after the dust settled, it's not that much different from the Bell era in terms of consumer choices. I have one choice in ISP for my address. A de facto monopoly entrenched by a lackadaisical attitude towards expanding infrastructure connectivity.
As I said, our problem was that we stopped holding monopolies to account. The Bell breakup was the last major success of the old approach to monopoly regulation. The reason you have one ISP is not the thinking that brought you the Bell breakup, but what came after.
'Deserve to be rich' is the wrong frame. What is a sensible procedure to decide who deserves to be rich and who doesn't? The say so of powerful politicians? 'Raised to the top via a combination of skill, luck and shrewdness' is more accurate. The fundamental problem is that the world is governed by power laws. As the size of the ecosystem grows (hello globalization) at some point it becomes obvious that no humans can effectively control the largest of the emergent entities. We need to break up Facebook, we need to break up the Internet, we need to break up the global economic system. We need to add friction back into the world. A lot of friction.
Currently, worldwide hunger and infant mortality rates are at an all-time low while population is at an all-time high (though the growth rate is quickly decreasing, so the threat of overpopulation has passed). Economic growth lifts a substantial chunk of that population out of poverty each year. Are you worried that you might immiserate or kill most of them when you break up the global economic system?
> I personally know and have previously worked with some of the people who work on trust and safety, specifically for kids. Good people who have kids of their own and who care about protecting people, especially children.
Those same people are protecting their children with $300k+ salaries and buying property in areas where they can send their children to Gunn HS. While I empathize with these people, the direct opportunity to protect your kin should not be underestimated. Do they mean well? Sure. Are they putting in their best effort to fix things? Sure.
Here's the most important part:
Do they know, deep down inside, that the only way to fix these things is to hurt Facebook financially? Probably. But they also know this means risking their ability to protect their own children (forced to move, lost job, less pay, etc.). What would you do? (I think I know the answer.)
This can't be overstated: in the end it doesn't matter what individual people at FB think, because no person or group of people has any legal, economic, or logistical ability to control the company except Mark Zuckerberg. He is figuratively and literally impossible to fight. Well, unless everyone deleted their accounts.
> Do they know, deep down inside, that the only way to fix these things is to hurt Facebook financially? Probably.
The crazy thing is that FB has taken steps to improve things in the past that also hurt them financially (e.g., post-Cambridge Analytica). They just make so much money, so fast, that it's like one or two bad quarters and it's over.
So (1) Mark being all-powerful means he alone can decide it's worth lower profits; he's done it before.
(2) The loss of profits probably wouldn't even matter.
I've been framing this whole thing as a universal property of human society and it seems to fit pretty well for me.
Outrage attracts attention in all group interactions. I can't think of a single large scale group forum where this isn't true. It's integral to an absurd degree in our news cycle. Howard Stern exploited this property in his rise to fame. It's a core element in state propaganda, well documented throughout human history.
I'm old enough to remember when the internet was a lot more free - when there generally wasn't some parent corporation imposing content censorship on what you put on your homepage, or what you said on IRC. All of the complaints regarding Facebook were true of internet communications back then too (on the "sex trafficking" issue, compare to Craigslist of yore!)
The big difference seems to be there's an entity we can point a finger at now. Communications on Facebook aren't worse than what was on the internet two decades ago. In fact, they're far, far more clean and controlled.
What I look to is whether Facebook is more objectionable than alternative forms of communication, and I can't find any reason to believe that this is the case. Is twitter better? Is reddit? Is usenet? No.
So why does Facebook draw such ire?
Are people calling for controls on Facebook also calling for controls on self-published websites? On open communication systems like IRC or email? Where is the coherent moral philosophy regarding internet speech?
To be honest, my biggest concern when I read the news surrounding this issue is that most of the internet might not be old enough to remember what it means to have a truly free platform, unencumbered by moralizing. Why are people begging for more controls?
I think a lot of folks forget that Facebook wanted to come in and clean up some of the filth in social media. They felt that by attaching your _real_ name to your posts, instead of a handle as was the traditional practice, you would have something to lose (social standing, esteem, etc.) and so you would be more thoughtful about your actions. The contrasts at the time were reddit, SomethingAwful, and 4chan. There was _definitely_ extant toxicity on the internet, and there were funny posts in the early days of Gmail claiming you could stop it from displaying ads by inserting lots of expletives and bad words into your email (so some people had Gmail signatures that just lumped bad words together and explained it as an ad-circumvention thing).
But I think there are a few key innovations that make FB worse for human psychology than previous iterations. Chief among them is the algorithmic newsfeed designed to drive engagement. Outrage certainly provokes responses, but in a chronological feed situation, eventually threads would become so large that the original outrageous situation would be pushed far back and the outrage would go away. Algorithmic newsfeeds bubble these to the top and continue to show them as they get more comments/retweets/shares/etc. They reward engagement in a visceral way that offers perverse incentives.
Secondly is the filter bubble. By showing you content hyper-relevant to your search interests, you can easily fall into echo chambers of outrage and extremism. Internet communities, like IRC channels, had huge discoverability issues. Each community also usually had disparate ways to join them adding another layer of friction. Even if you were an extremist it took dedicated searching to find a community that would tolerate your extremism. Now mainstream platforms will lump you into filter bubbles with other people that are willing to engage and amplify your extremist posts.
Combine horribly narrow echo chambers with engagement-oriented metrics and you'll have a simple tool for radicalization. That way when you're thinking of committing a violent act because of the disenfranchisement you feel in your life and your community, you'll be funneled to communicate with others who feel similarly and enter a game of broad brinkmanship that can quickly drive a group to the extreme. Balkanization and radicalization.
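To make the mechanism above concrete, here's a minimal sketch of the difference between the two feed styles. This is a toy model with invented weights and decay, not Facebook's actual ranking code:

    # Toy model -- the weights and decay below are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        created_at: float  # Unix timestamp, seconds
        comments: int = 0
        shares: int = 0
        reactions: int = 0

    def chronological_feed(posts):
        # Old-style feed: newest first, no scoring at all.
        return sorted(posts, key=lambda p: p.created_at, reverse=True)

    def engagement_ranked_feed(posts, now):
        # Every new comment or share boosts the score, so an outrage
        # thread keeps resurfacing instead of sliding down as it ages.
        def score(p):
            engagement = 3 * p.comments + 2 * p.shares + p.reactions
            age_hours = (now - p.created_at) / 3600
            return engagement / (1 + age_hours)
        return sorted(posts, key=score, reverse=True)

In the chronological version a post can only sink as it ages; in the ranked version, every fresh angry comment counteracts the age decay and hoists the thread back to the top.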
> I think a lot of folks forget that Facebook wanted to come in and clean up some of the filth in social media. They felt that by attaching your _real_ name to your posts ... you would be more thoughtful about your actions.
This is such a great point. The pre-Facebook Internet was full of anonymous random garbage. But everyone knew it was inconsequential garbage. Adding real names and likes changed all that: today garbage has gained legitimacy and is displacing prior forms thereof.
If there's one thing I've learned it's that the outrage never goes away. The type of people who fixate on outrage in their Facebook feeds are the same type of people who decades prior would cruise around town picking fights in person. I'm unconvinced that Facebook is meaningfully changing this dynamic.
I'm also unconvinced that the filter bubble is meaningfully different from what's come before. Humans have been sorting themselves into like-minded communities since before we could read and write. Do you remember the hive-minds of the 80s and 90s? If anything they were far more extreme because of the difficulty of proving anything, back before Google and Wikipedia. There was a lot more extremism and hate-based violence back then. A LOT, LOT more, and with no interventions like the ones Facebook is at least attempting to provide.
Facebook has some new angles on old patterns in human behavior, yes. But I think the people trying to show that it's made things worse have a lot of work to do to make a compelling case. Facebook's biggest transgression is probably that it has chronicled this behavior and dragged it into the light.
Very well put. When people say "it's always been like this" or "it's no different than X" – this is exactly the difference, and while fundamental human behaviors or impulses haven't changed, the design of the platform is changing how they are expressed.
We used to solve this problem by teaching people to have thicker skin, so that the outrage stayed controlled regardless of the forum in which it occurred.
However for the last 10 years or so grievance culture has taken root and not only excused outrage, its proponents have actively encouraged it.
It makes me think of that scene in Star Wars where Palpatine says "good, good, let the hate flow through you", except we now have millions of people encouraging this.
How I wish we could rewind things to a world where forgiveness was still a virtue and we were all taught that sticks and stones may break our bones but words will never hurt us. Without such virtues, a world of outrage is inevitable.
I think this is an important point indeed. A piece of this puzzle, in my opinion, is that people are not taught this at home anymore. Most families have both parents working full time, and they're exhausted after work. Their kids are raised in daycare and neglected. And so many are raised in divorced/broken/separated/single-parent households that compound the problem even more.
Furthermore, most of the US isn't religious anymore. These values and maxims mentioned above are not taught to people anymore, at least not to the degree that they were in the past.
A piece of this should be better training in the home for kids on how to understand the internet. To avoid being hateful and to question things. But so many kids are left to their own devices without parental oversight on this subject. I've even heard the call recently that parents want high schools and colleges to start teaching courses on how to avoid harmful content and misinformation online.
In what feels like ancient history, this used to be the parent's job, before both spouses were working full time.
Our kids and the younger generation suffer from lacking parental instruction on this.
My point is that Facebook likes are simply a manifestation of a ubiquitous social characteristic.
We all get likes. Sometimes they're called upvotes. Sometimes they're called replies. Sometimes they're cumulatively seen as our status in the social pecking order.
Facebook doesn't add anything truly new or transformative here. These problems and patterns are ancient.
The patterns and problems are ancient, but convenience is a significant factor in terms of enablement and resulting harm. Humans and other animals have been vulnerable to addictive substances for as long as we can tell, but the level of effort needed to get high was much much harder before we learned how to process and distribute addictive drugs cheaply and efficiently.
Usenet certainly does have a feedback/reward system. All group social interaction does. Trolling for feedback/reward predates Facebook not just by decades, but by millennia.
No one is calling for the internet to be less free, or have more constraints. They're calling for specific platforms to alter their interactions model to discourage toxic group behaviors at scale.
Seems like you, more than anyone, would see that solving the types of problems FB is trying to solve (e.g. freedom of speech vs. user safety and harm reduction) is not some super simple problem, no? I wouldn't call Reddit evil, despite the fact that many powermods are amazing contributors doing free labor and curating great communities while simultaneously abusing their power every day: silencing people they disagree with, shaping narratives in human culture, automating blanket unappealable bans on users for participating in unrelated subreddits (even if you were participating in that subreddit to combat its views), making snap judgments on content moderation that might ruin someone's day when they make a bad call on a ban or delete, or unilaterally appointing themselves mouthpieces for their broader communities via subreddit blackouts or preachy pinned posts.
It's unfortunate that when you build a product so close to the ground of human communication and human nature, you're never going to get everything right, and you're no longer solving technology problems alone but trying to combat basic human moral failing itself. We don't ask that of the telephone company.
^ That being said, we can only excuse some of their failures with the above line of thinking. Others we can blame on greed or recklessness, or ignoring the social costs of something like ML recommenders optimizing for engagement. Not sure if those things deserve to be called evil, but I'd still hold back personally. Misguided, overcome by greed, or reckless, perhaps.
Point of order: the issue with Facebook is the various engagement algorithms that they are and have been perfecting. This is unlike anything humans have ever seen before. We are no longer anywhere near to 'the ground'.
Yeah, there is a big difference between Reddit and Facebook in the above comparison. All the examples of issues with Reddit can more or less be attributed to specific people and fall more in line with "bad" human behavior. Facebook's algorithm is something entirely different in its design: its primary objective is to manipulate the behavior of the user on the other side, and what it chooses to show or not show doesn't follow any human line of reasoning, outside of some loose built-in "safeguards" and unenviable content moderators meant to serve as the guardrails.
As others have said, my experience with Facebook just doesn't mirror the anger and hatred that other people are seeing. My Facebook stream is just everyday things from friends I have made throughout the world. It is very useful for me to maintain a bit of touch with people I esteem but with whom I've lost touch over the years.
The "angry facebook" experience to me seems like the moms against heavy metal / twisted sister case: People are seeing a reflection of what their peers share.
If their circles are angry and share disinformation, that's what they will see.
I've also never had an issue with Facebook. I've been online through Usenet/IRC, AIM, and LiveJournal, and then was forced to join Facebook because everyone at the university was using it for class correspondence. Later, I had exactly your sentiments: it has allowed me to stay in touch with people I would otherwise have lost touch with over the years. I take advantage of some of the groups for my industry and my hobbies. I use our company's Page to interact with a whole segment of our international customer base that would never think to call our support telephone number or e-mail. It's never been a negative experience for me. Although I only look at it when I get home from work at night, on my desktop computer, and don't ride around all day with the app running in my pocket. I don't quite know if that makes a huge difference, though.
What I hear from Zuckerberg over and over is "we're good people and working on it, look at A and B things we're doing" with an implication that that's good enough, so what's everybody up in arms about? That's the core of his tone-deafness to me. If Zuckerberg is fully honest, it means he basically just doesn't have a grip on reality and he isn't fit to lead a corporation this big and impactful. And I tend to believe that, because he's ultimately just a college kid with a laptop who ended up in some circumstances that snowballed.
> ultimately just a college kid with a laptop who ended up in some circumstances that snowballed
When will this “just luck” characterization of Zuck die?
His entire company was certain they should sell for $1B, and most executives resigned when he didn’t. He maneuvered control of the majority of voting shares, how many other founders have done that? Instagram and WhatsApp were genius acquisitions everybody at the time clamored were too overpriced. Even Oculus has turned out to be the leading VR platform. All of the people close to him attest to his extreme intelligence.
Whether malicious or not, Zuck didn't just "aw shucks, I got lucky" his way into majority control of a $1T company, c'mon…
Nah, he's really smart, but implying $1T is a measure of his genius is ridiculous. Not only does it downplay the massive contributions of hundreds of people, including Peter Thiel and Sheryl Sandberg, but it ignores the market conditions that led to thefacebook.com going viral, not to mention the Winklevosses, who got paid billions in today's valuation. Do you believe that if Zuck had never met the Winklevosses, he would have necessarily built a $1T company anyway, because quantity X of genius must necessarily manifest to F(X) valuation? I think the market violently disagrees with you.
I'm disagreeing with the "just a college kid" portrayal. There were of course a few circumstances that greatly helped the trajectory, as did many other smart people. What I'm trying to get across is that without Zuck being very intelligent, Facebook's success would have been far, far smaller.
> So maybe Zuck is telling the truth here, that they are trying to fix all this.
Except they are just playing around with the outrage algorithms; the problem is created by Facebook, not some natural occurrence. If they wanted to "fix" anything they would make their algorithmic timelines opt-in, or at least an option, for starters.
It is of course very much in the interest of the people working at Facebook to make this seem like a problem that is just there, that it is somehow "difficult to solve", that "moderation doesn't scale", etc. These are deflections to make everyone ignore that Facebook's tampering is where it starts.
This. Their entire premise of modifying their engagement optimization to account for wellbeing while still optimizing engagement is flawed. It's clear that outrage and anger drive engagement above all else. If they want to fix things they can just bring back chronological feeds, but they won't, because the incentives are just too misaligned.
I know YouTube's recommender just works off what you just watched/searched (you can disable this aspect by clearing or disabling your histories), channels you have subscribed to, (I believe) videos you have liked or commented on, and videos you have marked as "Not interested -> I do not like this video".
Is Facebook's as "viewer driven"? Or does it recommend based on other criteria, e.g. what's generally popular?
Good people have gone to work at Facebook (and Google) on jobs like privacy engineering and really try to do good work.
However, no matter how capable and ethically sound they are, the incentives are forever misaligned with the profit models of both companies, and of adtech overall, as it currently stands. Truly good people can chase money and hope to do good things in the process. It's as simple as that.
The writing was on the wall when Alex Stamos, by all measures the best example of the type of person you're referring to and FB's chief security officer, left. He started in 2015 and was out of there by 2018. Not many C-levels walk away from a job like that for the reasons he did, and when they do, that should be the event to pay attention to (looking at you, Sheryl "Lean In" Sandberg). This was the marker event, if people were looking.
If by "established" you mean that it's well known, then yes, you're right. If instead you mean that it's agreed-upon or widely accepted, you'd be wrong. There's a lot of great debate / critique, both about how well the phrase actually applied to Adolf Eichmann himself (Arendt was famously only at the trial for like 5 days), and whether evil in general is ever, in fact, all that banal. Sadly the conversation around "the banality of evil" hasn't received a fraction of the attention that the phrase itself has.
>"Arendt was famously only at the trial for like 5 days..."
Besides David Cesarani's "Becoming Eichmann" book from the mid-2000s, where he stated Arendt "only saw Eichmann in action for four days", are there any other references that support this? I've not been able to find any. Also, the "in action" in that context specifically refers to Eichmann's testimony, not the amount of time Arendt spent in the courtroom. I'm not sure how "famous" this is. Elsewhere, her correspondence with her former teacher Karl Jaspers indicates she was there for 10 weeks. That would be about a third of the 8-month trial.
I don't think this idea matches what parent poster is saying.
Banality of evil is about how ordinary people can work on evil things while not being sociopaths and still being considered ordinary people. But it also presumes that there is some truly evil / sociopathic force driving this through authority, such as Hitler himself in case of Eichmann.
On the other hand the parent poster is saying that Facebook is simply too big to not end up evil, that evil is an emergent property of the million different processes that is Facebook. That view absolves not only regular workers of Facebook who are helping the company achieve evil things, but also the people who are actually in control of the company – Mark Zuckerberg and his senior executive team.
Personally I'm not buying either of these absolutions, but especially not the grand universal absolution that the parent poster affords to the whole company.
Ultimately it is someone's decision to put profits above everything else. Engagement doesn't excessively optimize itself. Users' contact books aren't getting stolen by themselves. Shadow profiles don't fill up themselves. "Just doing my job" is a choice, not an excuse. Many people are complicit in making and implementing these decisions for their own benefit, and they are all responsible for the outcome.
I don't think that's what the OP means, though. It's not "decent" people doing evil things. It's great people doing great things, within an organization that also does bad things.
There are some amazing people on their safety and moderation teams. They're also fighting marketing algorithms, I'm sure.
Eichmann in Jerusalem is the book that coined the phrase for anyone passing through, and it's a pretty wild story.
It's essentially Arendt, a Jewish exile from Berlin who fled the Holocaust, wrestling with her realization that Eichmann, a senior SS bureaucrat who organized major portions of the Holocaust, wasn't a psychopath but a completely mundane, thoughtless, career-focused functionary who was trying to rise in government and believed in doing what you are told, and who then organized one of the most evil acts in human history without reflecting on what he was doing.
It’s like saying that the people working in the slaughter houses are actually kind folk who do like animals and care deeply for their well being. That can be absolutely true, but they still work for a slaughter house. Your care and trust doesn’t matter a bit because the fundamental nature of the organization is that it profits from cruelty. I understand it pays well, and that maybe they are trying to be nice and all, but yeah there’s only so much purity of heart you can insist while still working for the slaughter house that is Facebook.
I actually think this analogy is the very opposite of what you may be trying to explain.
A lot of people that work at slaughterhouses do so because they have no other choice. It is the best opportunity that's afforded to them. It is a job that causes trauma for many, often has long, grueling hours, and doesn't pay well.
Working at Facebook couldn't be further from that situation. Never mind the obvious perks (the tech, the white-collar work, the gourmet food, I hear there's also a wood shop where you can do woodworking on your break, the half-million-dollar salary, etc.). The overwhelming majority of these people have a whole world of job opportunities to choose from, if they're willing to take a pay cut from an INSANELY HIGH salary to a merely VERY HIGH one.
So in that sense, they couldn't be further away from working at a slaughterhouse. The fact is, they could quite literally work anywhere else (any other company or any other city/country with remote work now), and they choose not to. It's not desperation but the textbook case of golden handcuffs.
It's very, very difficult to say no to 500k a year. I'm not even sure I could say no if I were in that position. I'd probably tell myself "Just coast for two more years and both my kids won't have to pay for college" or something like that, and keep going.
I have said no to a Facebook offer before (I actually recommend everybody apply to Facebook; they give great offers you can use for negotiation elsewhere, and it wastes their time). Like you said, we're talking about the difference between insanely high and very high here. I don't think a 20% increase in TC is worth making the world a worse place, and I'd hope for most people that's not a hard decision.
> So maybe Zuck is telling the truth here, that they are trying to fix all this. But no one can see the forrest from the trees.
Don't fall for words!
Frances Haugen was able to see the big picture. The documents she presented had Facebook employees mentioning it. Facebook didn't act on what was known. It is not that it wasn't known.
To paraphrase John Roberts - the only way not to do a thing is not to do that thing.
It's systems-level thinking applied to people. Reader beware though, once you start down this path, you can become adept at spotting this pattern emerge in many other human systems.
The statement that made it really clear to me was that Facebook has moderators for 50 languages... while supporting 111 different languages [1]. It's wildly irresponsible to offer services in a language you can't moderate.
And it sure seems like an intentional part of their fig-leaf denial strategy -- viz. the recent revelations about human trafficking on FB in Arabic [2], or armed groups in Ethiopia inciting violence on FB in ways that FB chooses not to monitor because of language issues [2].
A company with 21Q2 revenues of $28.5B can't hire moderators for languages spoken in countries with low costs of living? It reflects a thirst for growth with no thought given to the people affected by that growth.
That’s sort of how things are where I work. The systems are so complicated and the interactions are often algorithmic and machine learning based. We try to maintain documentation and architecture artifacts with as much accuracy as possible. But in some cases things may as well be magic because no one really understands the whole process.
The human element is not a variable we define in code. There are things that, by the nature of how they're used, become harmful. Intent does not matter. Good people can intend that their new free anonymous file sharing service will be amazing. Until it's used by bad actors. The concept is good, the intent is good, but in practice it doesn't work that way.
There's also another concept: the reality that people do not actually care as much as we think they do. There's a program every public school in the U.S. has where kids run at each other, at speed, knock each other to the ground with concussions, tear their muscles, break their bones, and behave terribly towards one another. Yet every school still has said program. Parents encourage their kids to join. We just don't care about what's right.
Not all evil looks that way to outside observers, unfortunately. I believe that the assumptions of FB that allowed it to get so big, "optimize engagement above all else", built a system that in many ways is at odds with the values of our society when everyone is a user.
Internally at FB, everything looks good: you hit all your OKRs and believe users are better off. Maybe you don't, but your bonus is huge, so you put your head down and keep on. Externally, it's an entirely different picture. Connecting people is a comically small issue society needs FB to solve, relative to our need for them not to harm children, or promote extremism, or hide research when testifying to Congress.
> The best I can come up with is that Facebook is so big that the "evil" is an emergent property of all the different things that are happening
> so while the individuals involved have good intentions with what they are working on, the sum total of all employees' intentions ends up broken.
I think, honestly, a huge thing is that when you put together basically the entirety of the internet and society into a giant conversational feedback loop, you're bound to spin out the worst, especially if FB wasn't trying 100% of the time to filter it (which they weren't, because it's a business and the problems weren't always equally known).
What I'm saying is that I know people working on these projects, and they are good people who want to make things better. They wouldn't work on these things if they didn't think it made a difference, as they all have plenty of other places to work.
It's ridiculous to think a platform that most of humanity uses can be controlled to the liking of the left, the right, the upside down, etc., because all those groups make up humanity, and we do not and will not ever think exactly the same; we all have different motives and biases.
Misinformation has been around forever. Ever play the telephone game in school? You tell one person a story, they tell the next, and the next, and soon that story is no longer factual. Stir all that in with bias and things get even murkier!
I'm reminded of Terry Pratchett's image of the row of mugs (with cute little sayings) owned by the torturers of Omnia's Quisition.
This is a generally hard problem but it's as significant now as it was in the aftermath of WWII. I'd say it speaks to the reality of human subjectivity, and it never goes away: I can only wonder if the same will be true of AI, and whether it's possible for a thinking being to really internalize the concept of hard limits to their perception, and build that into their model of the world.
You could say the God concept is a way of trying to internalize the limits to perception: 'something is vastly significant and it's not me, and my understanding does not and cannot encompass it'.
With OR without this concept we as humans are exactly as evil as each other. That's the secret. There isn't a qualitative difference between 'us' and history's great monsters. It's about the choices we've made and how we've acted on them: the rest is rationalization, which we are all subject to in one way or another.
Grappling with this is the Nuremberg moment: the question is 'never mind whether you feel you've been good, what have you done?'
The complexity of the system is too great. It's similar to how the economy runs: there are many well-intentioned, intelligent (even brilliant) people who study and focus on it. The problem is that it is so complex that no one can fully understand all the components. Not to mention the number of people, in Facebook and in the economy, who are intentional bad actors.
I'm not saying they are blameless; I just always have a tough time laying all the blame on a couple of people.
I have a friend who started work there in the last three years.
It’s so big and so organized, they can come up with an idea for a new service or policy they want to implement and it takes roughly two weeks to get all the channels to approve and move forward on the idea. Implementation is different, this is just getting all the approvals from legal, finance, marketing, etc..
They are definitely in a position to make changes quickly should they need to.
Sometimes it is not the virtue of people in the organization, it is a function of the structure and incentives of the organization, the "emergent property" that you reference.
Imagine if a company had invented methamphetamine, but the ill effects weren't as readily apparent. Then they built an empire on the belief that the societal benefits of millions of people running around in a seemingly ultra-productive manic state were a godsend to society, and that they had truly changed the world. Then realize that the effects of Facebook are worse than that--it has the opposite effect on productivity, has maybe worse mental health effects, and is nevertheless highly addictive. The reality would never sink in inside that bubble. Worse, the tens of thousands of people whose jobs and wealth depending on tuning said meth to be as addictive as possible are...what? Pawns? Believers? Accomplices? Delusional? Regular people. They are regular people.
It's been said that this psychopathic behaviour is an emergent property of many corporations, and that it emerges from the nature of their very legal structure. In other words, the people may be fine but the outcomes can turn out not to be. See:
"The Corporation attempts to compare the way corporations are systematically compelled to behave with what it claims are the DSM-IV's symptoms of psychopathy, e.g., the callous disregard for the feelings of other people, the incapacity to maintain human relationships, the reckless disregard for the safety of others, the deceitfulness (continual lying to deceive for profit), the incapacity to experience guilt, and the failure to conform to social norms and respect the law"
They're in a tough spot by design. Much of Facebook is private. How can they possibly be transparent enough to satisfy critics about what actions they take? Share too much and another Cambridge Analytica situation pops up. Share too little and researchers decry coverup over lack of access.
The problem with facebook is that it plays with fire every day. Kills innocent people every day. But they have a fire department, so they can't be all bad right?
If you can't do what you do in a way that isn't this harmful to the world, then you always have the choice to just stop it.
These are all smart people. They could be working on anything else and be successful at it. But they are scared of change. The money is too good.
I just wish the people working at facebook that are decent, that they would just leave and go work somewhere else.
And that we stop debating whether Zuckerberg is redeemable. He is not. He is a psycho. He is why it escalates this much. He is a monster. Beyond all the lies, he intends the damage he does to the world. Maybe someone bullied him as a child. Maybe he is just not well. I don't know.
It’s easy for me to reconcile after living through COVID. There are people in my own family who have emotionally told me they’d never do anything to hurt their family. Meanwhile throughout the pandemic they have purposefully hidden when they have been sick and spent full days with their elderly family and immune compromised 3 year old, touching food and participating in cooking. There is a big difference between emotional and cognitive empathy.
I also think the people who make the biggest show of how much they care tend to be the same who don’t actually act in a caring way at all.
No, I’m not surprised at all that FB employees say they really care. And that they do so very convincingly.
Couldn't it be possible that the people you know try hard but are limited with what they can do because of policy and decisions that come from above? Stuff like hiding research that looks bad isn't something that a dev or even a manager decides.
Who are these people, to be bold enough to speak for "everyone"? You are definitely not speaking for me. I personally get a lot of value from Facebook. I have never had any problem with it in any respect. I use it to communicate with my family around the world. I've used it to rent my apartment and sell things on Marketplace. I keep in touch with people I know. And I have very thoughtful and enlightening political discussions that help me make the right choice about who to vote for and stay informed. (The only other place with better discussions is Hacker News, though.)
There is only one way to fix this, prevent anyone from influencing what is shown more prominently to users. The simplest solution for that would be simply chronological order only from your friends.
"The best I can come up with is that Facebook is so big that the "evil" is an emergent property of all the different things that are happening."
I half agree. I do in fact think it's been baked in from the get-go; it's just that there was a period where it was not an obvious pillar. You could in fact do all kinds of essentially innocuous things and accept some surveillance capitalism, with awareness of your own liabilities.
It's now become so much larger and more problematic that the "emergent property" is that every move adds weight to the need for the firm's dismemberment into smaller units, or punishing regulatory limits. And I mean truly brutal, bone-breaking regulation.
"So maybe Zuck is telling the truth here, that they are trying to fix all this."
Nope. He knows that if he wants to stay on top, he has to keep doing more of what he's done. His only other choice is to actually adapt, which he will not do.
Shouldn't it then be possible to account for and correct for the emergent evil? That's the point of government regulation is it not? Maybe then an appropriate, self-critical response from Facebook would be, "Yeah, our system is broken. How can we help?" instead of immediately going on the defensive. If they claim to care about the bigger picture, they need to acknowledge it without excuses.
At this point in the climate disaster, those are getting very rare, especially in oil. Maybe those, if they exist, who are working hardest at stopping most oil production and enacting cap-and-trade.
If a company desired to be able to sow doubt if its impacts on society ever came under a microscope... one gambit (and an effective one, based on your reaction) would be to hire people who genuinely and passionately research and work on trust and safety, then systematically under-resource their teams and gaslight them into thinking there are fundamental reasons their recommendations must be ignored.
For instance, contrast Zuckerberg's statement here with the reporting quoted after it:
> And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?
> The memo is a damning account of Facebook’s failures. It’s the story of Facebook abdicating responsibility for malign activities on its platform that could affect the political fate of nations outside the United States or Western Europe. It's also the story of a junior employee wielding extraordinary moderation powers that affected millions of people without any real institutional support, and the personal torment that followed.
> She soon grew skeptical that her team could make an impact, she said. Her team had few resources, she said, and she felt the company put growth and user engagement ahead of what it knew through its own research about its platforms’ ill effects.
The fact of the matter is that if Zuckerberg were to say "I'm going to pour our profits into trust and safety and abuse avoidance in order to ensure that our position as a trusted brand is sustainable for generations to come," his high levels of voting control and clear defense to any allegations that this was against long-term shareholder interest would fully make that possible. The fact that quite the opposite has happened should be considered with much more weight than his words in a reactive press statement.
There is a German saying: "Der Fisch stinkt vom Kopf her" (the fish rots from the head down). 90% of all employees may very well have the best intentions, but this doesn't mean anything if the decision makers don't.
A company is not a democracy.
Indeed, we have probably just seen one of the (former) employees with good intentions struggle to stay true to them.
I mean, that's what it might be. This might be the "banality of evil": an emergent property of social networks themselves. If this is the case, then we have a harder question ahead of us as an entire world: how do we fix the problems of Pandora's box?
So if it's a modified form of Hanlon's razor ("never attribute to malice what can be explained by lack of capability"), particularly because they are too big, it sounds like the answer is to break them up so they aren't too big. Is that the solution?
So, anecdotally, the most amoral programmers that I've ever worked with have ended up at Facebook. I'm sure there are decent people there, but I couldn't personally work there in good conscience.
I don't think the existence of good people, or what anyone's intentions are, matters at all. I doubt anyone can change the course of Facebook. The stock price runs the show.
The disconnect is that Facebook is coming at this with the assumption that it is right and proper for Facebook to exist. The rest of us don’t make that assumption. So “how can Facebook best serve kids” might be “withdraw from routing tables permanently” but that isn’t on the whiteboard in Zuck’s office.
FB seems guilty, only because their internal findings were leaked.
I have no empathy for them. They bring out the worst in Humanity. They build walled silos of festering hate and anger, all driven by "user engagement", "hours on site" and money.
Just remember that rationality is bounded, i.e. there are problems chimps with six-inch brains can't solve. It's the classic Jurassic Park story, where man says he can control anything, and then realizes he can't. By which time it's too late.
This is why the road to hell is paved by "good people who have kids" with their good intentions.
FB's issues did not appear yesterday.
Like the endless war, the issues were there right from the start. So why are we talking about it today? Because lots of good people didn't do anything; not because they aren't good or skilled, but because the problem is too complex for them.
This is where Bounded Rationality helps resolve issues. If the problem is too complex, pick a simpler problem.
This is hard for some chimps to do, for various reasons. So entertaining them is a recipe for disaster. Their narrative will always be: "People are good. People experienced World War 1. They know what's at stake. They lost family, friends, body parts. Many are great heroes. Trust them. They know what they are doing." And still we got World War 2.
Why? Because rationality, skill, and experience don't matter for some problems. All the "good Germans", from politicians to religious leaders to military and intelligence leaders, knew Hitler had to go long before any notion of war entered their minds. Every coup and assassination they plotted, they second-guessed themselves. All of them ended up dead.
(Note: "Connecting people" here does not mean providing communications services, it means using behind-the-scenes, unconsented, and sometimes deceptive tactics to figure out whether and how people are connected to each other IRL.)
Andrew Bosworth
June 18, 2016
The Ugly
We talk about the good and the bad of our work often. I want to talk about the ugly.
We connect people.
That can be good if they make it positive. Maybe someone finds love. Maybe it even saves the life of someone on the brink of suicide.
So we connect more people.
That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.
And still we connect people.
The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.
That isn't something we are doing for ourselves. Or for our stock price (ha!). It is literally just what we do. We connect people. Period.
That's why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.
The natural state of the world is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don't win. The ones everyone use win.
I know a lot of people don't want to hear this. Most of us have the luxury of working in the warm glow of building products consumers love. But make no mistake, growth tactics are how we got here. If you joined the company because it is doing great work, that's why we get to do that great work. We do have great products but we still wouldn't be half our size without pushing the envelope on growth. Nothing makes Facebook as valuable as having your friends on it, and no product decisions have gotten as many friends on as the ones made in growth. Not photo tagging. Not news feed. Not messenger. Nothing.
In almost all of our work, we have to answer hard questions about what we believe. We have to justify the metrics and make sure they aren't losing out on a bigger picture. But connecting people. That's our imperative. Because that's what we do. We connect people.
When this statement leaked, he made the bullshit claim that he was playing devil's advocate. He certainly wasn't. This post was made at the same time as another leaked one, about Messenger adding a deceptive interstitial to get people to agree to share their number and contacts with FB.
The hard irony is that Facebook is just another mechanism to fragment people. It is no different than these other "borders, languages, and increasingly by different products".
It seems that the author is operating under the assumption that if everyone is inside of their product, the world won't be fragmented anymore. People will be connected.
Yes. They will be connected. To the product.
We can do better than this dreary future. It is possible to connect people as peers, without the exploiting hands of intermediaries like the executive who wrote this statement.
Zuckerberg is making an almost entirely emotional appeal in his statement. Most of his claims are not backed up / buttressed with facts, numbers, and specifics. The statement is designed to make a reader feel bad for Facebook as if Facebook was a friend, and not a corporation with billions of dollars in quarterly profits.
Though the statement seems well-meaning, it is weaselly and manipulative. It also conveniently doesn't address some of the deeper issues from Frances Haugen's testimony.
For example, Haugen focused on the fact that Zuckerberg has created a relatively flat organization where, if decisions help the core metric, they must be good, and vice versa. Haugen testified that Zuckerberg was made aware that a newsfeed tweak would entail a) a small ding to the core engagement metrics and b) a decrease in violence in Ethiopia... He chose the metric over the decreased violence.
There comes a point where blindly pursuing metrics -- be they money or engagement -- without regard to the effects on society is hard, if not impossible, to distinguish from sociopathic behavior.
Also, let's not forget that researchers and renowned statisticians employed or sponsored by Big Tobacco (e.g. R.A. Fisher) convinced themselves that smoking didn't cause cancer. [0]
How so? He asserted several apparently factual claims that would basically undermine or make irrelevant most of the commentary in this thread, for example:
- Social media can't cause "polarization" because the measurements of that are going down in most of the world, except the USA. But social media is heavily used everywhere.
- It makes no sense to claim an organization doesn't care about X when it heavily funds research into X.
- If you react to a company researching the harms of its products by leaking everything and publicly accusing the company of being evil, other companies will simply not do research into the harms of their own products.
The second two are just logic. The first would benefit from a citation but I'll take his word for it.
Ah, great question. The pattern I'm pointing to is a little subtle. I think at this point it pays to be extremely sceptical of Zuckerberg.
(Quoting from Zuckerberg's original post)
> "And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?"
This certainly needs a citation. Ethnic conflict in Ethiopia, division in Britain around Brexit, and US polarization would seem to be obvious counter examples. It should certainly be said that correlation is not causation. Also, note that the claim that 'social media doesn't cause polarization anywhere / everywhere in the world' is a subtle bait and switch from "Facebook causes polarization in certain areas" or "Facebook's lack of robust, well-staffed safety mechanisms allow it to be exploited to cause polarization in certain areas."
> "If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place? If we didn't care about fighting harmful content, then why would we employ so many more people dedicated to this than any other company in our space -- even ones larger than us?"
There is more than one way to care about a research program, and the absolute amount of budget spent on X is not the same thing as its relative priority. For a company that made on the order of $10 billion in profit last quarter, it'd be more surprising if they had no research program. Zuckerberg does not present any specifics here: what percentage of gross revenue is the research program? How many people are employed to fight harmful content, and how does this compare to how many are employed to encourage growth? And what's the point of research if the results are not acted upon? The whistleblower was pretty clear that the research doesn't matter if a suggested fix would cause even a <1% hit to the core engagement metrics. Does Zuckerberg have any specific facts about how many times civic-integrity or safety-based suggestions were prioritized over the core metrics, other than the one example he cites (Meaningful Social Interactions)?
Speaking of Meaningful Social Interactions (MSI), the whistleblower specifically said there is a foundational problem with how MSI is defined, because it includes the number of comments a post receives. Even without intending it, it is easy to see that controversial posts will attract more attention. Zuckerberg cites no evidence about the relative percentage of comments that are angry versus other emotions, and how this has changed.
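To see why such a definition is structurally biased toward controversy, consider a hypothetical MSI-style scorer. The weights here are invented for illustration; Facebook's real formula is not public:

    # Hypothetical MSI-style scorer -- invented weights, not Facebook's formula.
    # Comments count heavily as "meaningful interaction", but the metric
    # can't distinguish a friendly conversation from a flame war.
    def msi_score(likes: int, shares: int, comments: int) -> float:
        return 1.0 * likes + 5.0 * shares + 15.0 * comments

    calm_post    = msi_score(likes=200, shares=10, comments=5)   # 325.0
    outrage_bait = msi_score(likes=50,  shares=20, comments=80)  # 1350.0

    # The outrage bait outranks the well-liked post purely because
    # people argue under it; the score never asks why they comment.
    assert outrage_bait > calm_post

Any comment-weighted objective rewards whatever makes people argue, which is exactly the whistleblower's point.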
> "That said, I'm worried about the incentives that are being set here. We have an industry-leading research program so that we can identify important issues and work on them. It's disheartening to see that work taken out of context and used to construct a false narrative that we don't care. If we attack organizations making an effort to study their impact on the world, we're effectively sending the message that it's safer not to look at all, in case you find something that could be held against you."
Zuckerberg is complaining about incentive problems? The whistleblower has said that Facebook's very policies make it "not care" even if individuals do. This is also what comes across in the WSJ articles. In other words: the narrative isn't false, and this has been documented. His point about the specific incentive problem of leaked research is interesting, but it's a case of an abstract concern (for other company's research) vs. very real and well documented harm Facebook is a) doing now, and b) per the whistleblower, is unequipped to solve alone.
Also, at one point in Zuckerberg's missive, he shifts the locus of responsibility from Facebook to Congress: "..at some level the right body to assess tradeoffs between social equities is our democratically elected Congress. For example, what is the right age for teens to be able to use internet services?" Deciding what the "right" age is can take several forms. A panel of seasoned jurists, child psychologists, policy experts, etc., can spend a long time debating what the "right" age is in the universal sense. Or Facebook could take a stand, err on the side of caution, say that 17 is a better age than 13, and detail why they think so.
I'm British. I don't think anyone in the UK has tried to argue that disagreements over Brexit are caused by Facebook. Actually the whole idea would sound kind of absurd. People disagreed over Brexit because:
1. Some people disagree fundamentally over the nature of government and how power should function.
2. Some people were afraid of various kinds of "punishment" or instability that they were told leaving would cause, even if they would have supported it in the abstract.
Neither of these have anything to do with messaging apps or social media. As for ethnic conflict in Ethiopia - ?!?! - seriously?! That part of Africa has been a hotbed of bloody tribal conflict for my entire life. It's driven by the local culture, I seriously doubt anyone there gives the tiniest shit what people post on Facebook.
This is Mark's point. It's not a bait and switch to point out fundamental inconsistencies between other people's theories and the wider world. The idea that Facebook is some unique social evil that causes people to disagree just looks very odd from outside the USA, looking in. It's being made a scapegoat for US social problems. Everywhere else when people fight, they are well aware what they're fighting about and why.
Re: research. You seem to be arguing that yes, they spend a lot of money on this issue but it's not enough, whilst also admitting you don't know how much they spend. You're just convinced it's too low. But this is meaningless: research programs have natural costs and you can't simply double a budget and get ... what? Conclusions that are twice as "good"? Same conclusions twice as fast? It doesn't work like that.
Nor is research guaranteed to result in actionable outcomes. Look at their conclusions around Instagram. Some teenage girls said it made them feel worse, but more said it made them feel better. What's the actionable outcome here? Unless there was an incredibly specific kind of thing the girls who felt worse were seeing, there probably isn't any plausible action, and if there was some sort of specific content that made people feel bad, removing it would just be used as further proof of their guilt: they're manipulating the feed to increase engagement!
The rest of this thread is all like that. You start with a take that is itself controversial and extreme, like "people talking about controversial topics is inherently bad and Facebook should suppress it". Then when Facebook pushes back and points out that actually, lots of people like talking to each other, including about politics, they are cast as villains.
This has all the trappings of a purity spiral. No matter how much effort Facebook makes, it's never considered to be enough. Activists who aren't quite sure what they're trying to fight or why insist on ever more moderation in the hope that somehow this will cause other Americans to all start agreeing with them. The result is stuff like XCheck, an unstable downward spiral in which ever more aggressive moderation policies force ever more people to be exempted from them, lest the incoherency become too obvious.
Thanks for your comment. You make some good points. Zuckerberg's comment about the incentives of leaking research is certainly worthy of consideration. And while I don't have first hand experience with Brexit, I do not mean to claim that the disagreements were caused by FB. Only that FB may have had a role in causing people to become more entrenched in their positions.
One of the points I'm making is that Zuckerberg's statement lacks specifics in the form of numbers and data. I think it'd be interesting to read a point-by-point rhetorical analysis of his statement.
Also, because of this, yes, I don't know how much Facebook spends on research. I agree that though money and research quality are quite likely correlated, it's very hard to say by how much. That being said, I care a whole lot more about the values of the company. Haugen's testimony paints a textbook picture of a values problem. The whistleblower has repeatedly said, under oath, that Facebook understaffs its security and safety teams, and that they turned off the safety and integrity protections after the election, and more.
It's also true that civic divisions in the US -- not to mention other social problems -- run much deeper than Facebook. One mechanism people like me are concerned about is how users are recommended accounts to follow or content to view in ways that either deepen division or lead them toward more extreme versions of their views. In her testimony, Haugen gave the example of how indicating an interest in healthier eating on IG can lead to recommendations of anorexia / eating disorder content. Saying that Facebook's engagement-based ranking has nothing to do with promoting civil divisions seems to me like saying that the YouTube recommendation algorithm a few years back had nothing to do with the rise of the modern flat earth movement. Researchers have evidence that it did [0].
As for ethnic conflict in Ethiopia, I only bring it up because of Haugen's testimony. As this Guardian article puts it, "Haugen warned that Facebook was 'literally fanning ethnic violence' in places such as Ethiopia because it was not policing its service adequately outside the US." [1]. Your comment does make me wonder how many people in Ethiopia have access to the internet though.
This is a slight tangent, but it's also worth mentioning that re: IG and mental health... we don't know about other research, like any further attempts at a causal study -- most of what's been cited is correlational and comes from small-sample interviews. So it would be nice to see larger and more rigorous studies. I don't believe that research should stop with the question "Is Instagram harmful?" Of course that's going to have a mixed answer when dealing with large masses of people. "Who is susceptible to being harmed?", "By what mechanisms is IG harmful to some people?" etc. are questions that need answers.
I also disagree that people are so biased against FB/IG that anything they do will be seen in a bad light. Were they to tweak the IG recommendation algorithm so that an interest in healthier eating did not lead to anorexia content, people like myself would applaud. And though I am not an activist, I'm generally interested in (the enabling of) wholesome discussions and interactions, i.e. things that promote a feeling of being in a community / society rather than feeling apart from it.
I think part of the disagreement here is you see a whistleblower, but I see an activist. One who, frankly, if I were Zuck, I would have fired or simply never hired in the first place.
Arguing that Facebook causes tribal conflict in Ethiopia by not "policing aggressively enough" or "understaffing" teams is not, to me, the argument of a whistleblower. It's the argument of someone who has totally lost perspective, of a totalitarian who believes that any and all of humanity's ills can be fixed by manipulating communication platforms. It's no different to saying "if the phone company cuts off any phone call in which people are arguing, there will be no more arguments and everyone will be happy". When phrased in terms of slightly older-gen tech it is obviously absurd.
"Were they to tweak the IG recommendation algorithm so that an interest in healthier eating did not lead to anorexia content, people like myself would applaud"
Good on you for being consistent then! Sadly it seems to be very rare. Look at Zuck's post. He points out that Facebook did in fact make changes to prioritize stories from friends and family, even though that reduced their income and reduced the amount people used the site; i.e., a lot of users were actually people who don't care much about their cousin's cat pictures, but do care a lot about civics, or phrased another way, "divisive politics".
Yet it doesn't seem to have done them any good. For people like Haugen and a depressing number of HN posters it's not enough to re-rank nice safe family stories about new babies. For them Facebook also has to solve teenage depression, war in Africa and probably world hunger whilst they're at it. And if they aren't it's because they're "under-staffing" or refusing to "adequately police" things.
My perception is that people aren't expecting Facebook to solve teenage depression, but to prevent themselves from contributing to it if they are contributing. FB's research has been criticized by scientists as being of poor quality [0], and Zuckerberg claims the findings were cherry-picked. This is actually good news for FB if true. Should they partner with neutral, third-party university research teams, as well as commit to a transparent investigation, they'll be able to clear things up. Not everyone would agree, but I believe that many people are capable of changing their minds when presented with new evidence.
The metaphor of a phone company cutting off an argument is an interesting one. I agree that people arguing is a fact of society / nature, and I also agree that cutting off a phone call seems like an absurd way to try and solve the larger problem. But at the same time, I don't think the metaphor fully applies, for the following reasons:
First, a phone call is a one-to-one communication, and Facebook is one-to-many. It's rare if not unheard of for strangers to call each other and say what they think about, for example, a NYT article. Second, there is no recommendation system pushing "engaging" subjects, where engagement can be defined in terms of how controversial it is. Third, only 9% of FB users speak English, and Haugen testified that the safety features, tweaks to the ranking algorithm, and tooling are not as good (potentially drastically worse?) in non-English languages.
Most people would argue that phone companies have some responsibility to prevent spam calls, similar to how email services prevent or flag spam emails. These are network-level actions, and a lot of Haugen's testimony was about how FB was being irresponsible in this regard.
"Ethiopia violence: Facebook to blame, says runner Gebrselassie" This is the headline from BBC in 2019. It makes me so angry and upset. If facebook was run ethically how much smaller would it really be? 10%? 20%? I can't help think that although they would lose some customers they would also gain others.
> Now that today's testimony is over, I wanted to reflect on the public debate we're in. I'm sure many of you have found the recent coverage hard to read because it just doesn't reflect the company we know.
Yeah, we know you don't know, because you're looking at it from on top of a mountain of 100 billion dollars, Mark. There isn't a single damned thing that can change your picture of it being the absolute greatest thing ever.
> We care deeply about issues like safety, well-being and mental health.
What you care about, and what you say you care about, are nothing compared to your actions.
> It's difficult to see coverage that misrepresents our work and our motives.
We don't know your motives, other than the obvious ones. We know your actions.
> At the most basic level, I think most of us just don't recognize the false picture of the company that is being painted.
That's because you live in a big dumb bubble where chat apps are somehow world-changing innovations and creepy stalker behavior is completely fine to you. You are out of touch and people are screaming it at you. You think you are entitled to encroach on everyone's private lives, intermediate on every interaction and mine it for vulnerabilities to auction off to advertisers. Your entire model of the world is broken, Mark. No wonder nothing makes sense.
Stopped reading after this point. I'm sick of billionaires with megaphones blaring their virtues.
> I'm sick of billionaires with megaphones blaring their virtues
As a society, the US has shifted its values from intellectually sound principles to whatever rich people shout out.
I vomit in my mouth when I see videos of people showing currency, of people talking to you about "doing the hustle", etc etc.
The US has fallen into an abyss of moral decline, where the value of your words is directly proportional to the amount of wealth you have managed to gather, no matter the means.
> As a society, the US has shifted its values from intellectually sound principles to whatever rich people shout out.
Nothing exemplifies this as much as the whole situation surrounding public transport in the US.
A topic that's generally scoffed at "Everybody has a car, why would the US need a high-speed rail network?!"
At least until some billionaire presents his newest "innovation" by putting people in some pipe or another and allegedly making them go 600+ mph with magic inertia dampeners; then everybody loses their collective poop about this amazing idea, by that amazing entrepreneur!
Then they end up with a bunch of cars being driven through a tunnel, still no high-speed rail, but can't wait to chase after the next billionaire promising to shoot people through tubes at deadly speeds.
Would be excellent satire if it wasn't actual reality.
> Of the 17 presidents from the last century, only three (and just nine total) are listed as having a net worth of less than one million
You might want to quote the sentence at the top of the Wikipedia table in your link, that says this is their "peak net worth" and "may occur after that president has left office".
Read through the list in the link below. McKinley, Wilson, Coolidge, Hoover, Truman, Eisenhower, Nixon, Ford, Carter, Reagan, Obama and Biden can't be considered to have been born into families of wealth.
> You might want to quote the sentence at the top of the Wikipedia table in your link
> can't be considered to have been born into families of wealth
And you, in turn, might want to quote the sentence at the top of my previous reply where I state that I am not talking about being born into wealth. 12, by the way, is still a small minority. The majority of American presidents were millionaires. It's pretty cut and dry.
So agree. Comes off as super cringe. Silicon Valley has lost any moral high ground from the early days and should act like any other corporation: "it's not illegal and it would lose us money to change, why would we change it?" That would be honest, logical and frankly refreshing.
Sorry, what "moral high ground" did it ever have? SV is just a place for smart people to create interesting new toys, and has developed an incredibly predatory and monopolistic VC-funded business practice in the process. For some reason, its inhabitants have determined that this makes them morally superior to everyone else.
I believe Zuck became a billionaire by creating value for people.
For creating something from nothing. So if he has 100 billion, it's just a tiny fraction of the value society got from him in return. I wish there were more people like Zuck, Elon Musk, etc. These are the people that advance society.
I read so much hatred towards rich people here instead of praise; it somehow gives me the chills to know there are so many people around me who are full of baseless hatred to the point that they are "sick".
> And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?
That's a point I've made myself before, and I think there is some truth to it. Social media can't explain the high and increasing degree of US political polarisation, because other countries (including other English-speaking countries) consume social media about as much and yet have less polarisation, and don't show the same degree of increase in it either. The real explanation must lie in other aspects of US culture, or the US political system
That is because most other countries have multi party systems.
The polarization is still there, but spread thin amongst various factions.
In the US, people are shoehorned into R or D.
Edit: I would also like to point out that the OP is a bit confused between cause and effect. In the US, the effect is deep polarization. However, the cause is the power of mass communication, especially misinformation and blatant lies, which FB enables and does not bother to control. The cause is common to all the countries in the world; the effect varies due to various other factors, one of them being the presence of a multi-party system.
Take the example of India. FB has a large and active user base. However, India being a chaos of various identities, cultures, regions, languages, etc, divisions in society are less pronounced as there are a large number of players (politically, regionally, locally, etc)
Other than that, the effect of FB in Europe is also less visible for the same reason. EU countries mostly have multi-party systems, which spreads the hate and focus thin.
Because the Canadian electoral system is also FPTP, just like the US and UK. Our election results already reflect this: decades upon decades of essentially only two parties ever being in government and almost no coalition governments.
Most European countries have functional multiparty governments because they have some variation of proportional representation.
The UK doesn’t have a true multi-party system. First-past-the-post elections all but ensure there are only two dominant political parties. It’s really only been a game between Labour and Conservatives for longer than most of us have been alive.
The Lib-Dems were part of Cameron’s coalition government if memory serves. Ironically the former leader of that party (Nick Clegg) now works for … Facebook.
And, you have deep divisions there. If you perceive one single enemy, all your efforts will be focused on that one. If you have three, your efforts are spread out.
This makes the divisions, rhetoric and social environment seem less severe, but the problems are just as big.
Not sure if I fully agree with that argument. With the US being a superpower, the stakes are high and this invites unique foreign influence via social media. That foreign influence could be state coordinated or just come from regular people who have an emotional stake in US politics. I've seen it myself, outside of the US people are more concerned about US politics than their own and engage with it on social media which only adds to the pile of insanity. Also, because of limited bandwidth, only a small handful of topics tend to monopolize, and as such they are usually US topics.
I wasn't aware there was any question that foreign groups (state supported or otherwise) actively coordinate on Facebook to influence US political campaigns.
Most of the research and claims you see about bots manipulating people on social media fall apart when examined. For example they often rely on a badly trained ML model that labelled nearly half of Congress as "bots". This sort of thing is never admitted in the media - if you don't double check for yourself you'd never realize.
Another version of this is extremely sloppy methodology where users are labeled "influencer bots" for merely having certain foreign IP ranges and being active in discussions about certain topics.
Twitter in particular has been banning tens of thousands of accounts based on that flimsy and circular reasoning.
And because the affected people are locked out of the only system that would realistically allow them to draw attention to the problem, they are out of luck in bringing attention to their situation.
That's a good example of the problem. Well, it's not about bots, but the same definitional and logic problems are evident. The story defines "troll farms" as "professionalized groups that work in a coordinated fashion to post provocative content, often propaganda, to social networks". That description is so vague it could describe almost all news outlets and political parties, along with many charities. But, they aren't going to classify CNN, PETA or the White House itself as a "troll farm" although it would be easy to argue otherwise.
Facebook obviously has big problems with internal activists who are trying to convince the company to pursue an ever-spiralling purge against their ideological enemies, and good evidence of that is the unfalsifiable nature of the descriptions of the enemy.
I think the facts on the ground preclude the moral panic angle. Skyrocketing teen depression since 2012, a genocide in Myanmar, ethnic violence in India, a riot/insurrection borne out of fake news on election integrity, woke cancellation mobs empowered by Twitter and the power of wokeness over institutions, large amounts of vax hesitancy. All of these nasty things are circumstantially tied back to social media in one way or another.
I hate how "conspiracy" has been turned into a way to dismiss inconvenient truths.
There is a long and well-established history of interference in and manipulation of foreign elections (often by the US). It predates social media and can't just be blamed on Facebook. Pretending this isn't happening is just burying your head in the sand.
It isn't a "conspiracy" to think that the most influential elections in the world draws more attention and are more heavily influenced.
The political climate in the UK is pretty savage right now, Brexit pretty much split the country in two. I don’t think this is just a US phenomenon, especially given a lot of what happens in US politics bleeds into the rest of the world (which is why a lot of us follow it so closely).
...or, did tabloid-style "journalism" bleed from the UK to the US as they both reach new heights in fear-mongering and blaming the out-group to get rage-clicks as news organizations' incomes decline?
>The real explanation must lie in other aspects of US culture, or the US political system
First-past-the-post voting leads to a two party system which leads to more polarization. If you have multiple political parties, polarization can only go so far because there will be centrist groups working against polarization. In the US, there is no force pushing for centrism.
But it's also the sort of thing where you don't have to change the whole system all at once. There are now two states that will use RCV for federal elections as well as many more local municipalities (https://en.wikipedia.org/wiki/Ranked-choice_voting_in_the_Un... ).
With any luck, these reforms can stick and hopefully expand— necessary steps if we're ever to end this political duopoly.
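For anyone curious how RCV actually counts votes, here's a toy instant-runoff tally in Python. This is a sketch of the general idea, not any state's certified procedure; real rules also specify tie-breaking, which this toy leaves arbitrary:

    from collections import Counter

    def instant_runoff(ballots):
        # Each ballot is a list of candidates in preference order.
        ballots = [list(b) for b in ballots]
        while True:
            tally = Counter(b[0] for b in ballots if b)
            total = sum(tally.values())
            winner, votes = tally.most_common(1)[0]
            if votes * 2 > total:  # someone has a majority
                return winner
            # Otherwise eliminate the last-place candidate; their
            # ballots transfer to each voter's next surviving choice.
            loser = min(tally, key=tally.get)
            ballots = [[c for c in b if c != loser] for b in ballots]

    ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
    print(instant_runoff(ballots))  # "C": B is eliminated, B's ballot transfers to C

The transfer step is the point: a vote for a minor candidate isn't thrown away, it moves to the voter's next choice, which is what weakens the lesser-evil dynamic.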
>There is absolutely no way either political party will allow a multi-party system to ever exist.
I've suspected for a while that the real secret to getting PR/MMP (=> multiparty) in the US is getting the Catholics on board. They're sick of the Trumpists and libertarians in the Republican Party but they'll never support the vehemently pro-choice Democrats. They should be obvious fans of a multiparty movement, and they have the political and media heft to move the needle.
Alas, I'm not Christian and my views are quite a bit left of theirs, so I don't think they'll listen to me. And most PR advocates have more in common with me than with the modal Catholic American.
That assumes a model where most of the electorate is politically moderate. A sort of "normal distribution" where the mean is centered around centrism. These days, the US electorate is highly polarized and the parties are responding in kind.
A bunch of models I've seen suggest that from 2016 onwards negative polarisation means that it's turnout of each side's base, not independents, that's deciding elections now.
However, while the resulting wobbling hasn't made much practical difference outcome-wise so far (turnout tends to depend on the level of outrage, which depends on how long the other side has been in power), it has had a much more significant effect on how elections need to be fought.
(Much information about this can be found by googling 'negative partisanship' or 'rachel bitecofer' +/- the word 'model'. I'm not including specific links because which outlets/articles/etc. will be preferable to a given reader will likely vary, so providing a guide to finding a range to select from seems more useful than my trying to select on others' behalf.)
They don't have to be in the center, they just have to make the lesser-evil argument. They just need to express to the swing voters that the other party is even more extreme.
I despise first-past-the-post, but it does not force two party systems. Look at the UK and Canada, who have two parties able to form a government, but multiple viable parties.
If you want a culprit, blame the lack of whipping in both US parties, which is a matter of political culture. Since you have historically had all these cuckoo crazy subgroups within both parties, other parties simply do not have the bandwidth to exist.
However, in the last four years, the Republicans have started to blindly follow their leader as a matter of pride. I see that as the one positive change the Trump era has brought; yes the man to bring it about has been a disgrace, and has used it for vile purposes, but if both parties stick to following their political platforms and leaders far more, the US can end up in a better place.
First-past-the-post voting is the primary cause in the US. It doesn't automatically result in a two party system everywhere if there are other features of the government that work against it. The biggest reason Canada is different is that they have a parliamentary system that elects their Prime Minister while the US has the Electoral College which elects our President.
> The biggest reason Canada is different is that they have a parliamentary system that elects their Prime Minister
Quebec is another big difference–there is nothing really comparable in the US. Spanish is the closest thing the US has to French, but there is no state in which Spanish outranks English–and even if predominantly Spanish-speaking Puerto Rico successfully gains statehood, Puerto Rico would be only a medium-sized state in terms of population, so it will not be able to have the same impact on US national politics that Quebec has on Canadian.
The US has a lot of cultural diversity, but its cultural diversity is spread out thinly, rather than being regionally concentrated – as a result, that cultural diversity can't be reflected in the political sphere in the same way that it is in Canada or the UK or Belgium or Spain, where the distinctive culture of specific regions of the country causes them to develop their own unique party systems.
The big difference is as philistine implies: the two US parties are dramatically more democratic internally, so there is just far less need for people to create new parties to begin with. By world standards the Democrats and Republicans are barely coherent parties at all. For instance, they barely have any manifesto; instead presidents have manifestos, but that's meaningless if the party members don't agree, hence why shit is always being "blocked by Congress" in the USA when you rarely see that in other countries. The two US parties are more like vague semi-stable alliances of people who don't really agree on much, that happen to march under a shared flag for the sake of convenience and due to FPTP.
Another difference: an outsider who isn't even a party member or who was actually a member of the opposing party simply cannot run in open primaries in most countries. Yet both Sanders and Trump did this. Nor can they take over the party against the will of the representatives themselves, which is what Trump did. The closest equivalent in Parliamentary systems was Corbyn, whose rise occurred the moment the rules were loosened to allow more open voting for who leads the party, and that nearly destroyed Labour. They have now changed the rules to re-establish the power of the MPs over Party leadership.
The USA will see more parties appear and compete if/when the two big parties develop internal cohesion, for example by insisting that the party leader/president is picked by Congress alone, and in which Congress members who defy the party line are kicked out of that party. Otherwise it's kind of meaningless to try and create a new party in the USA when the existing parties don't actually stand for anything to begin with. It makes far more sense to try and take one of them over.
I think you have the causality backwards. The two parties are so large and varied because the US system pushes us towards two parties. The coalition building happens within the party before the election (usually as part of the primary and convention). Other governments have coalition building that happens post election of the parliament as part of electing the Prime Minister. That means a vote for a minority party isn't throwing the vote away like it usually is in the US.
Well, that could be. It feels like two sides of the same coin though. The USA could use a more Parliamentary approach with tighter parties but more of them.
> For instance they barely have any manifesto, instead presidents have manifestos, but that's meaningless if the party members don't agree hence why shit is always being "blocked by Congress" in the USA when you rarely see that in other countries
I don't think that's really unique to the US. That's an inherent problem with presidential systems–the President and Congress are independently elected, and so they can be controlled by different parties, and even being controlled by the same party is no guarantee they will cooperate–and presidential systems are very common in the Americas–most Latin American countries have the same system. I'd be surprised if Latin American countries don't have some of these same problems with "political gridlock" that the US does.
This will sound dramatic, but could it be because the US is the most important market for both advertising and political disinformation?
Regarding advertising, for companies like Facebook, they may have billions of DAU but still derive the majority of their revenue from rich countries like the US. In 2020 the US and Canada were 45% of Facebook's revenue according to this random website I found: https://statstic.com/facebook-revenue-by-geography/ That's way more than the other regions in terms of $/user, so Facebook is a lot more incentivized to over-optimize for engagement based on US users. Depending on how much the algorithmic feed changes from country to country, it's possible that other countries experience less polarization simply because their culture is weighted less in training the feed's engagement optimization algorithms.
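Back-of-the-envelope, using the 45% revenue figure above plus Facebook's approximate 2020 public figures (~$86B revenue, ~2.8B monthly users worldwide, ~260M in US & Canada -- ballpark numbers, so treat the ratio rather than the exact dollars as the point):

    revenue_b = 86.0       # total 2020 revenue, $B (approx., public filings)
    us_ca_share = 0.45     # US & Canada revenue share, per the link above
    users_world = 2.8e9    # worldwide MAU (approx.)
    users_us_ca = 0.26e9   # US & Canada MAU (approx.)

    arpu_us_ca = revenue_b * us_ca_share * 1e9 / users_us_ca
    arpu_rest = revenue_b * (1 - us_ca_share) * 1e9 / (users_world - users_us_ca)
    print(round(arpu_us_ca), round(arpu_rest))  # ~149 vs ~19 dollars per user per year

Roughly an 8x difference in revenue per user, which is why any engagement optimization would plausibly be tuned hardest on US behavior.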
Regarding disinformation and political destabilization, most countries simply aren't relevant enough on the world stage to be worth investing money in targeting in this area. The US, China, Russia, UK, Germany, France, Japan, India are all probably relevant enough. China and Russia effectively don't use Facebook and are the most obvious non-US-aligned bad actors. They would also get way more bang for their buck targeting the US than other countries. Note, I don't think this is as widespread a problem as many people think it is, but I bring it up because it's relevant in the context of political polarisation, since there is strong evidence that it occurred at least in the 2016 election: https://www.theguardian.com/technology/2017/oct/30/facebook-....
Actually, the two combined can be scary. If you can use outrage (great for engagement) to drive engagement in your content which is designed to politically destabilize the US, you can get a huge reach. This is effectively what you see a lot of the time in highly-engaged US content on facebook anyway: politically inclined outrage.
It's the US legacy media that is the main reason for the polarisation - the two party system is a factor, but the regular media is the one pitting "us vs. them" in every single minute of every broadcast.
Social media has downsides, but to lay the blame firmly at the feet of Facebook is to willfully ignore the culpability of CNN, Fox, the NYTimes, Washington Post, and many other legacy media outlets that are making tons of money otherizing the part of the country that is not their readership.
The premise of this argument is false. In Germany, where I live, polarization has increased dramatically, especially since COVID. I don't know about Facebook, but the tone on Twitter is harsh, the hate is palpable. The difference is maybe that unlike in the States it's not a 50:50 split, but maybe 80:20.
NBER's study [0] found (West) Germany had the biggest decline in political polarisation over the 1980-2020 period of all the countries they included in their study (12 OECD countries).
Doesn't entirely contradict your position, given that it was specifically measuring polarisation in terms of attitudes towards political parties, and so may not be good at measuring forms of polarisation that do not map straightforwardly to political parties; and looking at long-term trends over 40 years doesn't tell us much about how people have reacted to something which has only happened in the last 18-24 months.
The study failed to find any statistically significant correlation between political polarisation and proxies of social media use (Internet penetration and online news consumption; the study authors did not have data on social media use itself)
I'd also like to see references for which countries these are supposed to be, with declining polarization and high FB/IG usage. It's not a trivial or uncontroversial thing to quantify, but hey, MZ brought it up.
It’s interesting that you don’t even agree with the “fact” that Zuckerberg claims: that polarization is declining or flat outside the US. Your claim that it’s less than in the US, but still increasing, is a lot more aligned with reality.
But the fact is that this isn’t the defense Zuckerberg thinks it is. In fact, it may even suggest the absolute opposite.
Facebook has never been as popular outside the US as it has within it. The best indication of this fact is FB’s $19Bn purchase of WhatsApp which was largely driven by the fact that FB Messenger was basically only popular within the US, with the rest of the world preferring WhatsApp, which was also an indication of how FB’s network was simply not as entrenched outside the US as it was within it.
The more likely reason, however, is that assuming FB or social media in general increase polarization, it would almost certainly worsen it in the US more than anywhere else because of the US’s fairly unique 2 party structure combined with the primary system, both of which would almost certainly exacerbate any polarization effects caused by an external factor.
According to one researcher (https://www.brown.edu/news/2020-01-21/polarization), polarization is increasing as parties become more closely aligned with ideologies (e.g. religion, race). Looking across the aisle, your opponents look more different than they did a decade ago.
Why? My theory is that data mining and software is identifying and targeting seams of ideology that are most readily influenced, so in effect campaigning efforts are efficiently widening the divide between parties. Social media just happens to be the choice source of this data, as well as the medium to influence.
This is a good observation, but I don't think it absolves social media. There are likely many factors increasing polarization, and social media may amplify those factors and/or turn otherwise harmless factors into harmful ones.
The core idea behind social media is that it amplifies various voices, instead of the few whose job it is to participate in the media. In the US, this has effectively turned our right to freedom of speech into a right of freedom to broadcast.
The total size, wealth, and political influence of the US are much larger than the most culturally comparable nations. The value for those within and without the nation to spend effort using social media to influence discourse and election results is high, and is likely at least a partial factor.
That's worded to tiptoe around the truth, which is that in other countries FB is increasing polarization, that FB knew about it, and that FB tried to ignore it - from the third^ file:
>In Poland, the changes made political debate on the platform nastier, Polish political parties told the company, according to the documents. The documents don’t specify which parties.
>“One party’s social media management team estimates that they have shifted the proportion of their posts from 50/50 positive/negative to 80% negative, explicitly as a function of the change to the algorithm,” wrote two Facebook researchers in an April 2019 internal report.
>Nina Jankowicz, who studies social media and democracy in Central and Eastern Europe as a fellow at the Woodrow Wilson Center in Washington, said she has heard complaints from many political parties in that region that the algorithm change made direct communication with their supporters through Facebook pages more difficult. They now have an incentive, she said, to create posts that rack up comments and shares—often by tapping into anger—to get exposure in users’ feeds.
The Facebook researchers wrote in their report that in Spain, political parties run sophisticated operations to make Facebook posts travel as far and fast as possible.
>“They have learnt that harsh attacks on their opponents net the highest engagement,” they wrote. “They claim that they ‘try not to,’ but ultimately ‘you use what works.’ ”
>In the 15 months following fall 2017 clashes in Spain over Catalan separatism, the percentage of insults and threats on public Facebook pages related to social and political debate in Spain increased by 43%, according to research conducted by Constella Intelligence, a digital risk protection firm.
Facebook researchers wrote in their internal report that they heard similar complaints from parties in Taiwan and India.
Add to that the revelations from yesterday's^^ hearing regarding FB's role in violence in Myanmar and Ethiopia, plus repression in the PRC and Iran, and there is no other interpretation than that Mr Zuckerberg is lying.
Is it true that polarization has not increased in other countries? Is there published data on it? Would be interesting to see which countries are experiencing historically high levels of polarization.
Their main results are on the page numbered 20 (page 21 of the PDF) – their data shows political polarisation has grown (over the period 1980-2020) in six OECD countries and declined in six OECD countries. The six countries where it has grown (in order from greatest to smallest growth) are the US, Switzerland, France, Denmark, Canada, and New Zealand. The six countries where it has declined (in order from greatest to smallest decline) are (West) Germany, Sweden, Norway, Britain, Australia and Japan.
They don't have any direct measures of social media use in their source data; the closest things they have are Internet penetration and consumption of online news, but they found no statistically significant correlation between those and polarisation. The only clearly statistically significant correlation they could find was a positive correlation between polarisation of societal elites and that of the general population (p=0.011). They also found a positive correlation between increasing racial diversity and political polarisation which was of borderline statistical significance (p=0.052).
Thanks. I'd also like to point out that 40 years is a long time in relation to the more recent FB algo changes that reordered the timeline in order to increase engagement.
What gives you the impression that countries besides the US are not becoming more polarized? My experience in Europe and South America make me think it's getting worse everywhere.
Obviously it depends on the actors on social media what level of harm social media can cause. If there weren't political forces, foreign governments and troll media such as Fox and friends, then social media could be much less harmful.
It requires considerable expertise and resources to spin up disinformation campaigns continuously. The US leads here because of the size of its market and the determination of its adversaries.
Mark tries to spin the responsibility away in the quoted sentence.
The fact that people such as Peter Turchin predicted a peak in US political polarization starting in around 2020 (a prediction made in a published paper in an academic journal in 2010) shows that surely FB is not the immediate cause. But that's kind of like how gasoline doesn't start fires. It doesn't, but it accelerates them, and makes the consequences worse.
Given the late 1800s and early 1900s US were also very polarized, part of me wonders if the post-Cold-War world is returning to some sort of polarized state. It can feel like there are a lot of different perspectives on the way things should be, and thus there's less certainty where to go.
That said, I'm not a historian so take my shower thoughts with a teaspoon of salt.
One possible explanation is that Facebook in US is subject to more adversarial actors than other countries.
A propagandist-for-hire choosing to astroturf, conduct false flag campaigns, form vote brigades, etc. is more likely to target the US political market because -- globally speaking -- there's more political power at stake.
Whether FB can claim that as an excuse is another question.
How would you categorize FB/whatsapp-driven genocide in Myanmar[0], polarization(+) or polarization-lite?
The semantic acrobatics in statements like this drive the discussion into pedantic details that successfully lose sight that on net, this is a cancerous product, with a pathology unique to each culture it touches.
Like, there have been leaked FB PM discussions on this very topic, phrasing it as a product challenge rather than the moral calamity that it is.
Or it could be the enemies of democracy can leverage social media for division and the enemies of authoritarianism can’t. And some countries don’t have as big of targets on their backs as the US.
Because the rest of the world is behind the curve, the US is merely leading the charge as its established political landscape perfectly plays into the filter bubble polarization FB feeds on.
Something that's quite observable in places like Germany: Election campaigns in Germany used to be quite boring, there used to be no such thing as "political attack ads", parties kept their campaigning to topics they stand for, instead of trying to attack other parties or candidates on their particular positions.
At least that's how it used to be for the longest time, but during the last decade the whole tone around German elections became noticeably more hostile, something that directly correlates with the rise in popularity of the AfD.
Said AfD hired the American firm Harris Media for their campaign strategy, the same company that won Trump his presidency [0]. They not only introduced the wonders of micro-targeting, which had already contributed to the Obama presidencies, but also added their American flavor of running political campaigns with these extremely hostile overtones.
The latest highlight of that escalation, during the recent election, has been far-right parties literally calling to "hang the Greens" [1]
Or to give another, often overlooked, example: the US wasn't the only country that saw its capitol stormed in recent times. An attempt was also made in Germany, but there the police actually stood their ground [2]
And while most of the big established parties condemned what happened there, the rising far-right ones did nothing like that, they were right there riling people up.
You don't need to go anywhere else to find polarization. Right here on HN, conservative points are downvoted, flagged and shunned despite having excellent credibility -- just a different perspective on core values. This was not always the case; note how few downvoted comments there are in this highly debated topic from 2016: https://news.ycombinator.com/item?id=12926678
It's one of those things that says a lot about the people on HN and their tolerance for a wide range of opinions. And yes, we all agree that rudeness, disrespect and violence have no place on HN. But disagreeing on tax policies? Property laws? Patents? Let people talk about it.
I really don't see it different than anti-liberal hostility on conservative forums. It's the same. Exactly the same.
Let's change this and exercise some restraint. Most people on either fence have good faith.
You need at least a 3rd viable political party to fix this. A two party system is polarized on issues. Democrats believe conservatives are homophobic, racist womanizers and conservatives believe liberals are communist, devil-worshipping pedos. It doesn't matter if either of them have facts the other party is so opposed they won't listen.
The point is that moderate comments that slightly challenge mass ideology are downvoted on HN. That clearly and factually didn’t use to be the case, as the 2016 thread I linked shows.
They are downvoted for the reason I just stated. Why would you upvote someone you believe is opposed to your entire existence based on what political party they associated with?
This just doesn’t resonate with my experience on HN. Ad-hominems are aggressively downvoted, and bad-faith arguments usually are too, even in political threads. There’s clearly something being valued higher than party affiliation by HN’ers else that wouldn’t be the case.
That isn't the issue. The issue is that good-faith arguments are downvoted as well. If you are Republican, there are certain ideas that will instantly get downvoted simply because of your political affiliation. You could say, the most abundant element in the atmosphere is Nitrogen as a Republican and you'd get a bunch of downvotes and people arguing just cause they think all Republicans are anti-vax idiots.
Same with Democrats depending on where you are and what you're commenting on. People are polarized to the point that they no longer care what you have to say. If they think you're with the other political party it is instant downvote.
With the caveat that I don’t know much about Facebook’s research projects… the question “If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place?” sets alarm bells ringing for me—because I can imagine a cigarette company saying the same thing about the scientists they hired to argue that smoking isn’t so bad after all, or an oil company saying the same thing about the scientists they hired to argue that it’s too soon to act on climate change (especially since they’re “leading the charge” on renewable energy research).
(I’m influenced here by Naomi Oreskes and Erik M. Conway’s work in “Merchants of Doubt.”)
I think that the point here is that all the controversy is because of a study from Facebook's own research. If a cigarette company came out with a study showing cigarettes were dangerous, perhaps it would be a different story.
Of course, I wouldn't say that being aware of the problems your product causes makes you less culpable; just the opposite.
At the same time, I think that when it comes to assessing whether the company is acting in good faith or not, it is important to consider not just what research they do or don’t do, but also which findings they choose to promote and which they choose to keep internal…
(Part of the fiasco with tobacco or climate is that those companies knew the risks associated with their products well enough, but didn’t publicize the findings that might cast them in a bad light in the same way that they publicized the findings that supported a “let’s wait and see” approach to policy. See for example https://www.scientificamerican.com/article/exxon-knew-about-...)
Sure, but if a cigarette company commissioned a whole load of research, sat on the "bad" results and publicized the "good" ones, and then a whistleblower came forward and showed the world the bad ones they had sat on...
Google handled this in the right way: fire researchers who are clearly pursuing anti-company agendas, always. It's worse to pretend to support research (or "research", which TBH is what most of the anti-AI shite at Google was, bad actors working with external groups to "take down" big tech, funnel docs to DSA, etc) that hurts you than it is to just stop pretending you care what a bunch of hard-left extremist assholes think.
Facebook has a tough problem, though, because a lot of their criticism is not just one wing of loonies, it's from every direction including normal sane people, and a lot of them agree with each other for once. I don't think they're breaking any laws, but I don't doubt someone will invent a shady case or charge them under a new law just because everyone hates Zuck. Not sure how you dodge that.
He can step down from active management; he doesn't really need to run it anyway. FB is the only big tech company that is still founder-run (largely because it's younger).
They could split / diversify / dilute the brand and products the way Google has done, making it harder to associate them with the bad things directly.
> If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place?
I mean...precisely to be able to control that research. Exactly like companies in other sectors (tobacco|oil|pharma|AI). It's asinine that this is presented as a rhetorical question [1].
For those (myself, as well) comparing portions of this response to tobacco, oil, and pharma companies controlling (and manipulating/suppressing) research, an example to keep in mind that's much closer to home would be Google censoring negative AI research [2].
[1] See for example, The Influence of Industry Sponsorship on the Research Agenda: A Scoping Review
> If we attack organizations making an effort to study their impact on the world, we're effectively sending the message that it's safer not to look at all, in case you find something that could be held against you.
Someone on HN actually linked to the slides, and it does seem like there's a lot of FUD being created.
This reminds me of moxie talking about damaging research focusing on Signal:
> To me, this article reads as a better example of the problems with the security industry and the way security research is done today, because I think the lesson to anyone watching is clear: don't build security into your products, because that makes you a target for researchers, even if you make the right decisions, and regardless of whether their research is practically important or not. It's much more effective to be Telegram: just leave cryptography out of everything, except for your marketing.
Three minutes of my life I'm never going to get back. You can stop reading by paragraph two at "We care deeply about issues like safety, well-being and mental health." All Facebook cares about is engagement.
Yup. I once saw an accountant write, "Don't tell me your priorities. Show me your budget and I'll tell you your priorities."
Facebook's whole history is not really caring about those things. What they care about is not looking bad. Which is why they trot out those lines when things get too awful for the public, the press, and legislators to ignore.
"We are so, so sorry you caught us {doing, allowing} this thing. We are deeply embarrassed that we didn't hide it well enough from you. We promise to take the time to do the work so that in the future you won't discover us still doing it. We deeply value {word spew of the month} and look forward to you believing our apology well enough to continue using our service."
And they're really good at it! If somebody only sees one, or if somebody sees them so infrequently that they forget in the meantime, it's very plausible. Which is of course their purpose. And Facebook itself is practically a machine for making people forget about long-term patterns.
Another interesting thing to notice in these cliché apologies is that they adjust their "number one priority" depending on the context.
Every single time a business gets hacked because they neglected security for their whole existence, they say "our customers' security is our number one priority". Then they completely ignore security again for the next few years, until the next time they get hacked and make security their number one priority for an hour.
Yup! One of the things I often tell executives in prioritization meetings is that "priority" is from "prior" meaning "comes before". If everything is a priority, nothing is a priority. So I force them to order things linearly.
It generally breaks their brains the first time, but they quickly adapt.
I have NO doubt that if you asked Mark to push a button that would make all the engagement healthy, he'd do it after being assured the engagement amount wouldn't change.
He probably does care about "safety, well-being and mental health" (though not about serial commas, apparently). He cares about them in the same way that the vaping companies care about lung health (they exist to reduce cigarette smoking, right?)
Same sentiment after reading this. He is basically saying "we have an industry-leading research team, so even if we don't make decisions aligned with their recommendations, we care deeply about their work?"
I handled Mark's rambling post like many others similar to it: I reported it as 'false news - politics' (in this case^) and then sent a message to the author politely detailing why.
^ After niceties I started with a quote from his post which was false, and then the explanation from reliable news sources of why it was 'false news - politics.'
Unfortunately most of us unknowingly are giving away our time, attention and data to the likes of Facebook, etc. and don’t know what it’s costing us. Join the club and delete, take away their power! You can socialize and share just as easily without these jokers.
I did think the fact that his "we're not actually pure amoral engagement maximisers" article was gated behind first signing up and logging in was just perfect under the circumstances.
Honestly, that seemed quite a sufficient summary of whatever the text was.
I’m with you. Facebook brings way more good to the world than bad. And the bad it brings is just amplification of what already exists. And with the Internet that amplification will happen with or without facebook.
Also the fact that much of the research / “leaks” are showing most of the issues are only happening in the US is a sort of a glaring hole in most of the arguments being made against FB. The US is just borked at the moment. Everything is red vs blue, and no one even cares what the issues are at this point.
Most people completely have their head in the sand about how utterly awful the average person is. A massive chunk of what people complain about is simply attributable to the fact that people are monsters, and that global connectivity means we can now all talk to each other without distribution being tightly bottlenecked and controlled (eg publishers, broadcasters, radio, etc). There's little that Facebook is accused of that seems as salient as this fact, and little they could do that would avert much of what we're seeing.
Oddly enough, I've felt for a long time that Facebook is among the most unethical companies out there. It's very weird to see it facing a potential reckoning on the back of such incredibly weak claims.
Absolutely. But one of the things Facebook does is to favor showing those bad things to others because it drives engagement. The feed is just a steady stream of the same low-effort, high-emotion stuff.
The problems with Facebook are not (all) unique to Facebook, and they are not single-faceted. Another problem is that they try their hardest and ugliest to inject themselves everywhere they can, to scoop up emotions and events to get a fuller view of people, so they can sell ads.
I quit Facebook in 2012, block anything FB in my browsers, don't use their messenger etc. I just hope this'll work when my daughter is old enough to want a phone too.
Facebook tried to go with "we're not bad, the people are" a while back. Yeah people are bad, but if you provide the tools for them to organize and further radicalize, you are responsible for what comes next.
Do we take Facebook away? The same problem will appear again in the next big social media network.
I do not know why people try going after Facebook with pitchforks when it is very much a people problem. One way to solve this would be to take the anonymity of the internet away completely, but then people would lose the legitimate benefits anonymity provides.
Facebook is just a platform, it isn't like there are a bunch of evil people sitting on a computer finding the most vile pieces of content and showing it to you.
It is what people seek; Facebook just facilitates it. I am sure if Google were to do similar studies on what people search for and how it affects them, it would see similar results.
Turn down the volume. Facebook specifically takes "high engagement" content that is politically charged and full of hate and prioritizes it for distribution. They have shown they can deemphasize harmful content around elections, but when the heat is off, they go back to the old way to maximize time spent on platform / ad revenue.
In fact, remove the news feed altogether. It has nothing to do with "connecting the world"; it exists only to hook people on low-quality / high-engagement content and is the source of most (but not all) of their problems.
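As a sketch of what "turning down the volume" could mean mechanically -- hypothetical scoring and a hypothetical classifier flag, not Facebook's actual system -- the difference between pure engagement ranking, demoted ranking, and no feed ranking at all is just this:

    def rank_feed(posts, demotion=0.2, chronological=False):
        # chronological=True removes engagement ranking entirely
        # ("remove the news feed altogether").
        if chronological:
            return sorted(posts, key=lambda p: p["timestamp"], reverse=True)
        # Otherwise rank by engagement, but demote posts that some
        # classifier has flagged as hostile or inflammatory.
        return sorted(
            posts,
            key=lambda p: p["engagement"] * (demotion if p["flagged"] else 1.0),
            reverse=True,
        )

    posts = [
        {"id": 1, "engagement": 900, "flagged": True,  "timestamp": 3},
        {"id": 2, "engagement": 300, "flagged": False, "timestamp": 2},
        {"id": 3, "engagement": 100, "flagged": False, "timestamp": 1},
    ]
    print([p["id"] for p in rank_feed(posts)])  # [2, 1, 3]: the flagged post drops from the top

The lever exists; the reporting suggests it only gets pulled when the heat is on.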
I should be clear: I don't think that there are no levers Facebook can or should pull here. But that framing contradicts the breathless hysteria and shoddy statistics behind the push that Facebook is _causing_ damage, as opposed to being in a position to uniquely reduce harm -- in a way that (eg) contemporary radio manufacturers were not when radio was being used to foment the Rwandan genocide.
That is to say, these upheavals are inherent to global connectivity and empowering the voices of the masses; the obsession with Facebook as an entity is down to intentional blinders about this fact.
Although, I should note that I strongly disagree with your strong form that "providing the tools for people to organize" means you're responsible for everything that happens. I don't see people rushing to give Facebook the same share of credit for Black Lives Matter, or the successes of the gay rights movement in the last decade, or any other outcome they consider positive that relied on "the tools provided by Facebook to organize". Just as Marconi or contemporary radio manufacturers aren't to be blamed for radio's role in fomenting the Rwandan genocide. As I said, people are monsters, and the idea that their communications tools have inherent responsibility is as nonsensical as saying that airlines or car manufacturers are responsible for allowing the crowd to get to the Capitol on Jan 6.
Well, to start, Facebook is neither good nor bad - it is not a person and does not have morality. The human mind is not really capable of dealing with an abstract entity such as a corporation, so we anthropomorphize the large mess of people, software, and hardware presenting us a little app on our phones as imbued with a personality and morality. Then we debate whether this mess, which has Zuck's vague face over it, is a personification of good or evil. It's neither - it is a morality-free phenomenon.
Facebook's properties have benefits for certain slices of the population in certain instances, and negative effects for other parts of the population. Businesses rely on Facebook's properties for communication, people rely on Messenger and especially WhatsApp for communication, artists rely on Instagram for inspiration and social outreach, and teens use the properties to connect with each other and to feel sad or happy about themselves in social contexts (as teens tend to anyway). In some cases the teens are happier, in others sadder.
I got to know my gf over Messenger, I have wasted tons of time on Facebook, and Instagram has been a moderate influence on me. Life happens, people happen, and now social media happens.
Zuckerberg is the public face of the organization now and it is his life's work. He has an entire society's worth of social media interactions he is tasked with controlling, censoring, and maintaining. People tell him they want it free for some speech, closed for other speech; they want Facebook to hire tons of people to censor and curate the content; they want great decision-making; they want to be free to say anything they want to say (so long as it matches their specific moral values); they want their kids to only see good things online; they want their kids to find what they need but see only what they should see on all of Facebook's multifaceted properties. It is a huge task.
Why would Zuckerberg intentionally want to fail at this task and have people hate his company? To make more money? He is not an idiot and knows that the success of the company depends on its reputation. How exactly could he NOT do his utmost to mitigate the negative effects of his properties, enhance the positive effects, and thereby positively influence everyone: the users, the company, himself? I don't understand the cynicism that drives all these comments either.
Your argument is the same as “Guns don’t kill people. People kill people”. It’s still something with potential to cause a great deal of harm and should be regulated.
> Why would Zuckerberg intentionally want to fail at this task and have people hate his company?
Nobody wants to fail at anything. Doesn’t stop people from failing all the time. Nobody wants to get fired but if you suck at your job you get fired all the same.
You’re approaching this way too logically without considering the social aspects. Stop trying to think like a robot.
> Your argument is the same as “Guns don’t kill people. People kill people”.
Sort of. The argument is more like "Computers don't kill people. People with computers kill people by starting revolutions, committing online crimes, etc. Also, people with computers do lots of non-lethal and helpful stuff as well."
Facebook and mass social media are like that computer. Lots of benefits, lots of detriments as well. Doesn't mean you should cancel the computer.
It's pretty standard that when you sell your company to someone for vast sums of money, they get to control it. Their vision and goals now win, because it's now their company.
I get why people hate Facebook, but to me Facebook is no worse (or better) than any top social network service would be, at least in this format (a timeline where you can post anything).
If FB shut down today, another one occupying exactly the same space and having the same influence on society would appear soon.
They do a lot of good bringing people together, but it’s a double edged sword with massive potential for abuse. It’s not up for debate that Facebook causes harm. The only question is how much is too much? At what point do we step in, tell them their mitigations are ineffective, and break up or regulate their systems?
The question is how many alternative systems and services, ones people could have used to keep in touch with their loved ones, Facebook steamrolled through network effects and questionable business practices.
Services that might not require you to expose your real identity to the whole internet, that might not exfiltrate your full contact list from your phone or would not apply questionable morals and censorship on the content you exchange with your loved ones.
My parents and I live on different continents and we still manage to speak regularly, share pictures, etc. All of that without any of us having a Facebook account.
All of which have enabled many abuses such as financial fraud, terrorists being able to communicate, pedophiles sharing pictures, etc. Ban phones and email. They're too dangerous.
I can manage fine by seeing friends and family in person. If you can't, maybe you just don't care enough?
A messenger app brings people together. Such an app doesn't need an algorithmic feed to hook people on junk/harmful content. That was added for one reason: money.
> In fact, in 11 of 12 areas on the slide referenced by the Journal -- including serious areas like loneliness, anxiety, sadness and eating issues -- more teenage girls who said they struggled with that issue also said Instagram made those difficult times better rather than worse.
All the reporting has been about how the research found that Instagram was terrible for teenage girls, but that seems to be a total mischaracterization. Honestly, if you ask teenage girls about anything (clothing stores, schools, television), there's going to be a mix of positive and negative experiences. Is the bar we are holding Facebook to that, no matter how much good they do, any negative experiences outweigh it? Is that a bar we would hold anything else to?
Is there a link to the actual findings of the study? I feel like that statement is cherry picking, and without context there's not much weight to it.
The ultimate issue here (unless I'm misunderstanding the controversy) is about whether Facebook decided to act on the findings of the study which showed Facebook/Instagram was causing harm to teenagers. This sentence from Zuckerberg seems to be disputing the findings of the study, and implying that everything everyone is saying is wrong and/or a lie (or a "mischaracterization" as you called it)
Zuckerberg has zero credibility in my eyes, so I am inclined to call bullshit on that. But if the actual original study is out there, I might skim through it to see if this is just another one of his lies.
We need to see the actual data to conclude either way. It's so easy to make a statement like that which sounds compelling but conveniently hides some nastiness somewhere else. Just as it is so easy for the whistleblower to cherry-pick a stat to make her point. Neither of them is lying, but their framing may have some level of dishonesty that is obscured from us, since we can't look at the bigger picture. We have no idea how these questions were asked, the methodology, etc.
Anyway, there's lots of research on social media done in universities, so we don't need to take their word for it.
> According to internal studies retrieved by Haugen, Facebook found that 13.5% of teen girls say Instagram makes thoughts of suicide worse, and 17% of teen girls say Instagram makes eating disorders worse.
This is a snippet of the research Mark's referring to. Oh good, only 13.5% of girls feeling more suicidal.
That's a high bar. Let's shut down malls, competitive sports, grades in schools, hell, schools themselves, teen magazines, television, arcades, even suicide hotlines, etc., because they all made at least one person feel more suicidal.
And then you could say, well, maybe some of those places didn't do the research. In which case, isn't that worse? If they are making people more suicidal and they don't even care enough to research and find out, how are they possibly going to get better? I would much rather an institution research the harms (and benefits) that it may be causing than to just turn a blind eye.
While we're at it, we should start tearing down any large or particularly beautiful bridges and condemning their architects and engineers; there's a ton of research showing how those things increase suicides.
While it did make it (suicidal thoughts, aiui) worse for 13.5%, it made it better for 38% (see https://s.wsj.net/public/resources/documents/mental-health-f... slide 10). Is that better than schools? Is that better than television? Is it better than malls? Is that even good overall or bad overall? How does it compare to competitive sports? What about grades in school?
I mean, we could take the position that if any of these cause any teen girls (or boys) to be more suicidal we should condemn that thing and rid ourselves of it, but I think that would be a mistake.
> if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?
I have a kind of unpopular belief that we'd still have similar polarization if misinfo were just distributed on Fox + MSNBC, websites, and Twitter.
I think Facebook is “that bad”, but one point I thought was believable was about major advertisers not wanting to be associated with hate mongering posts in the first place.
I’m prone to believe that, but I'm also not sure that it has anything to do with the other issues at hand.
I'm not convinced that this is a real concern for most advertisers. Facebook and its properties are so big that they're a potential revenue source you can't ignore. Also, it's different than your ads programmatically appearing on some extremist website for the world to openly make the association. Everyone lives in their own little bubble, and they know the ads have little to do with the content they're seeing, and more with what they engage with or are more likely to buy.
I have never signed up for Facebook but if my family members have ever "uploaded their phone contacts to find more Facebook Friends" or if my relatives have ever uploaded pictures with me in the picture ((Facebook has detected a face we don't recognize. Please tag the person in this picture)), or if I have a US Government Name and/or Social Security Number, there's a company that is building a DATABASE DOSSIER with information about me. I cannot ask them to please DELETE information they have about me because those Shadow Profiles are SECRET, I'm not even supposed to KNOW that these creepy data-scrapers have a file on me.
Facebook is just the front end for the NSA / surveillance state backend, and at this point it's frightening how Too Big to Fail they have become.
When you're designing a Big Brother dystopia in which everyone is tracked and surveilled, step one is to create something like Facebook. "All your friends are doing it! You should too!"
No, not the only one. I don't like the recent narrative that Facebook is an extremely evil corporation and responsible for most issues in the world. I think people overestimate the influence of Facebook and just want to live in a world where everything they don't agree with is fake news, controlled indirectly by Russia and spread through "useful idiots" (usually defined as people from the opposite side of the political spectrum, because my side of the political spectrum only consists of highly intelligent, empathetic people who analyze every news article in detail).
I suspect you, like I, have either minimal FB use or have healthy guard rails for our use and expectations.
A non-trivial portion of the country doesn't have that mindset. Further, many in that group lack the understanding of FB's ability to influence wide-spread opinions.
Different groups get mad at Facebook for different reasons, so inevitably they get a lot of hate.
In the context of the whistleblower:
One of the popular methods of manipulation is to use "harmful to children" as a basis for making an argument. We have seen this countless times in the past on a variety of issues. This is no different. The "harms" that are being highlighted here are equally, if not more, applicable to adults. Children are at a stage in their lives where good parenting can easily offset any potential harm by consuming content on Instagram or Facebook.
One can recognize that this particular topic, like many contemporary topics, is a subset of the overarching libertarianism versus authoritarianism debate, and opinions often cleanly fall on political lines depending on the complaint. In this case the whistleblower has left-of-center politics, so they have a grievance with "disinformation" and "not enough control". There have been previous whistleblowers who have had right-of-center politics, who have cited "censorship" and "biased control" as their grievance. There is ample evidence for the company being guilty of both, with regard to specific instances.
As such, there will always be complaints from opposing points of view as to whether the company is doing "enough" to police content, or whether the policing has become biased. Amusingly, you see the reverse of this debate when you look at actual policing in the USA, where one side argues there is bias in policing and the other argues for harsher control and punishment.
Those who fall on either side of the spectrum tend to paint with a broad brush some kind of systemic evil conspiratorial agenda at the company, as a consequence of voicing their respective frustrations.
Overall Facebook is a net positive for the world. There are likely activists within the company trying to push agendas, some of whom may be prevailing over others. This is evident by just taking a walk around campus and reading the political messaging that adorns the various shared spaces. These are also largely irrelevant in the long term because if and when it reaches any sort of extreme, eventually the pendulum will swing too far.
Good riddance. Like any tech company, the topic has nuance. FB provides utility, no doubt about it. It's also a massive double-edged sword with nasty trade-offs.
What is finally coming to light through this is the anger that's existed forever among individuals who want to opt out but can't. Individuals who do not want this moralizing, tone-deaf inevitability projected by Zuck/Sandberg/supporters. Every parent that's had to fight their PTA, citizen that's fought their community group, or parent countering their own parents about wanting to participate in those critical aspects of civic life, but not doing it on Facebook - "no, you can't post a picture of my kid on there." This is putting aside the Q, the DJT, the absurd Hurricane Maria VR tour Zuck took, the "Lean In" paired with insta body shaming... the everything of the last 4+ years.
FB is not some corner of the internet, it's not like privacy/encryption/crypto types you can sort of ignore and leave it at that. It wormed its way into so many aspects of life, and the only way to live a "normal" life and avoid it is to be sort of an abrasive a-hole to people you love. I'm so glad this anger is finally getting a voice. If this turns south, the company has what's coming to it.
For me, it's the almost complete lack of guilt or awareness that his company has caused pain and even death. Yes, FB and its acquisitions have brought many many good things to my life and to the lives of others. They have also brought a lot of pain to me and to others as well. No service only provides good. He doesn't seem to say this.
Instead of him trying to remind us "we do good things" I wish he would also say "and with any platform so large, we are going to make decisions that will hurt people. Sometimes we get so wrapped up in our culture that we may not be aware of the pain we're causing to others and, while the research can help, sometimes we still miss it. Judging by the backlash of many people here, while many feel very grateful for FB, some may feel a lot of pain when thinking of our services and we want to do better and we need your help."
Something that expresses some hint of awareness that many people are being hurt and a desire to take the lead to try to fix it.
For me there's a useful distinction between guilt and shame. Guilt is where you fall short of your own standards and seek to rectify it. Shame is where you feel bad about falling short in the eyes of others.
Large corporations often perform shame, but they rarely behave as if key actors experience guilt. So they'll make reforms only as long as there's significant pushback.
If Zuckerberg actually cared, he would be writing letters like this even when the heat is off. He and they would have sincerely worked to fix these problems from early on.
I really appreciate this distinction. I've noticed that when I feel ashamed, I tend to hide from someone, whereas when I feel guilty I tend to feel driven to open up to apologize to them.
Does anyone else remember when Facebook got rid of the friends-only feed ~2014/2015? That's when I stopped using Facebook, so I'm not sure if it's been brought back since then, but I'm assuming it hasn't.
To me getting rid of that feed is in direct conflict with "Meaningful Social Interactions".
I'm not sure if anyone else feels the same, but a lot of this post feels very gaslighty to me.
Agreed. That was a moment for me when I decided to remove the Facebook app and eventually delete my account. A social network that forces you to look at garbage other than friends’ posts is not really a “social network”.
While I would agree the truth lies somewhere between pure evil and evil as a side effect[1], the comments Mark makes don't give me any indication that he appreciates that if someone thinks you're evil, they aren't wrong in their thinking; as CEO, it is your job to figure out how they got there and see how many other people think that way.
So whether or not Facebook is actually evil, this perception appears to be wide spread enough to result in material harm to the company through overt regulation of their actions and choices.
Mark's job includes "owning the problem." It is not his job to argue persuasively that it isn't really a problem.
While it was perhaps not as effective as Bill Gates would like, the "We have to fix security" memo was a response to Bill owning the problem of the perception of security of Windows. It actually acknowledges the perception, and identifies what is going to be done to change that perception. Mark would do well to study that event in history as this seems like a repeat.
[1] Note that I accept that you can have really good things in the middle of a bunch of evil things, so for me (and I accept that this may be a unique and nonsensical statement to some) there is a balance of evil things and good things, and if the evil is 50.1% then you're a little bit evil with a lot of non-evil stuff. And if you're 80% evil, you're mostly evil with a bit of non-evil stuff.
Gonna be honest. This isn't a level playing field.
It's not like Zuckerberg sat there and wrote this. They have the money to find the best psychologists, etc., to write a response in such a way that it resonates more with people, versus a whistleblower who's taking on Goliath, is stressed, etc.
What really should happen is calling a few random employees based on an organizational chart in and making them testify under oath what they're doing. No NDA, no right to fire the employees afterwards for at least a few years.
Well for all the money & resources they have, it's a pretty tone deaf message. No remorse or even the slightest admission that they could be doing better. Pretty much asking everyone to not believe what we've all been experiencing for the last few years.
I don't remember him communicating with much remorse ever since Facebook started. His communication style seems very similar to how I think it's been over the years.
"Industry-leading": this self-applied adjective was thrown into the mix five separate times within this note in an effort to explain how great the work that FB delivers is (and likely, to imply that the negativity applied to FB in the public eye is unfair).
Saying Facebook is "industry-leading" with respect to social media and technology is like saying Priscilla is your favorite wife, dude.
Put your money where your mouth is, or you'll likely continue to lose it as various outlets reported this week.
I think you're missing some key nuance. Zuck mentions the research as industry leading because the research has uncovered something deeply embarrassing for Facebook.
It's relevant that this is "industry leading" because no one else has done this work. The implication is that the same patterns at Facebook are likely true with other forms of media.
Other researchers don’t have free access to the same data, so their own research is industry leading by default :) Anyway, the term is intended to be used as PR to appeal to the media, I assume, than to be really accurate.
I'm not trying to defend Mark here, but it does bother me slightly that people continuously bring up that really old quote of his from many years ago. People change. I've said some disagreeable things on chat many years ago as well. Maybe as a joke, maybe I was frustrated, but I do regret them and I've since grown as a person. I would hate it if someone kept rehashing something I said ages ago to further their narrative of me. Would it not make sense to judge a person by what they're doing now rather than what they did in college? What about you? Have you never made dumb comments that you really wish you hadn't? What if those kept getting brought up against you now rather than who you've become since then? Just a thought.
A Facebook login is required to read an explanation to the public? I get a login/password screen... There is probably a good quip there about the psychology involved in placing a response on a private forum, but the effect for Facebook is I will not see their response.
Yesterday free speech was so important that platforms must syndicate all topics equally and fairly. Today these platforms are so extremely dangerous to society that they should be shut down entirely.
I’m not a fb apologist by any means, but maybe we should stop blaming these websites for everything and invest in education instead.
> I’m not a fb apologist by any means, but maybe we should stop blaming these websites for everything and invest in education instead.
Even educated people are susceptible to content that is specifically designed to incite outrage and divide us. Facebook (and other sites) optimize for that type of content because it maximizes engagement. Teenage girls developing body image issues also apparently maximizes engagement.
If there's any framing here that absolves Facebook of culpability, it's that everyone else is doing it too, and that there are no laws/regulations in place to prevent it... But even then, it's still obviously immoral to anyone paying attention, and the employees responsible for it at Facebook have no excuse for not realizing that. They chose to act immorally.
I appreciate both you pointing this out AND the dunks on Zuck both here and on his post at FB. I'm surprised they didn't set up a custom algorithm to bury the most critical responses at his post.
Anyway, here's an attempt at some thoughtful nuance:
There's lots of wisdom readily available out there that describes how to most responsibly deal with critical feedback. For example Rapoport's Rules (via Dan Dennett). First you express the other's position so they can say "yes, that's exactly how I feel". Then, you describe where you agree and what you've learned. Finally, you can go into where you disagree.
Another framing is to ask "what is true in the feedback?" with the presumption that there is truth to be found.
This and other approaches are all about maximizing what to actually learn from feedback.
By contrast, Mark's post is a textbook-worthy demonstration of how to be politely defensive, even being candid about your emotional vulnerabilities, recognizing your frustrations, not lashing out in anger… all while admitting absolutely ZERO about anything in the criticism having any merit or opening any questions to really grapple with. At its best, his letter says "I'm really saddened about how my work is misunderstood, and I don't want to blame the critics, but I'll work to get past my reactive feelings and go back to continuing my wonderful work that has been so unfairly maligned".
I'll grant that this is more mature than a tantrum that attacks the critics and spreads rumors and lies about them. Zuckerberg is a more thoughtful and nicer person than Donald Trump.
> I'll grant that this is more mature than a tantrum that attacks the critics and spreads rumors and lies about them. Zuckerberg is a more thoughtful and nicer person than Donald Trump.
I wouldn't conclude anything about Zuckerberg's character from this letter. There is absolutely no way he wrote it. I'm sure he signed off on it, but he didn't write it. Some PR crew filled with lawyers and communication specialists wrote it.
Maybe, but he's in charge. If he cared about the concept of learning from feedback and trying to find out what is true from criticism, that sort of approach would be part of the whole FB culture and would be part of how speech-writers write things.
Regardless, being nicer and more thoughtful than Trump is such a low bar it's meaningless. I could believe Zuck to be that way and also believe that Zuck is a fundamentally clueless sociopathic moral monster. I don't really know of course. I suspect there's at least some complexity here.
But it's not just this letter, everything I've ever heard from Zuck indicates that he believes his own company's BS or at least stands by it.
Why would anybody defend Zuck or Facebook? He knowingly hired the political activist who leaked all these documents and expected a different result? Just like Google, these corporations don't seem to learn their lessons, and they act surprised when these activist employees betray them.
I wanted to share a note I wrote to everyone at our company.
---
Hey everyone: it's been quite a week, and I wanted to share some thoughts with all of you.
First, the SEV that took down all our services yesterday was the worst outage we've had in years. We've spent the past 24 hours debriefing how we can strengthen our systems against this kind of failure. This was also a reminder of how much our work matters to people. The deeper concern with an outage like this isn't how many people switch to competitive services or how much money we lose, but what it means for the people who rely on our services to communicate with loved ones, run their businesses, or support their communities.
Second, now that today's testimony is over, I wanted to reflect on the public debate we're in. I'm sure many of you have found the recent coverage hard to read because it just doesn't reflect the company we know. We care deeply about issues like safety, well-being and mental health. It's difficult to see coverage that misrepresents our work and our motives. At the most basic level, I think most of us just don't recognize the false picture of the company that is being painted.
Many of the claims don't make any sense. If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place? If we didn't care about fighting harmful content, then why would we employ so many more people dedicated to this than any other company in our space -- even ones larger than us? If we wanted to hide our results, why would we have established an industry-leading standard for transparency and reporting on what we're doing? And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?
At the heart of these accusations is this idea that we prioritize profit over safety and well-being. That's just not true. For example, one move that has been called into question is when we introduced the Meaningful Social Interactions change to News Feed. This change showed fewer viral videos and more content from friends and family -- which we did knowing it would mean people spent less time on Facebook, but that research suggested it was the right thing for people's well-being. Is that something a company focused on profits over people would do?
The argument that we deliberately push content that makes people angry for profit is deeply illogical. We make money from ads, and advertisers consistently tell us they don't want their ads next to harmful or angry content. And I don't know any tech company that sets out to build products that make people angry or depressed. The moral, business and product incentives all point in the opposite direction.
But of everything published, I'm particularly focused on the questions raised about our work with kids. I've spent a lot of time reflecting on the kinds of experiences I want my kids and others to have online, and it's very important to me that everything we build is safe and good for kids.
The reality is that young people use technology. Think about how many school-age kids have phones. Rather than ignoring this, technology companies should build experiences that meet their needs while also keeping them safe. We're deeply committed to doing industry-leading work in this area. A good example of this work is Messenger Kids, which is widely recognized as better and safer than alternatives.
We've also worked on bringing this kind of age-appropriate experience with parental controls for Instagram too. But given all the questions about whether this would actually be better for kids, we've paused that project to take more time to engage with experts and make sure anything we do would be helpful.
Like many of you, I found it difficult to read the mischaracterization of the research into how Instagram affects young people. As we wrote in our Newsroom post explaining this: "The research actually demonstrated that many teens we heard from feel that using Instagram helps them when they are struggling with the kinds of hard moments and issues teenagers have always faced. In fact, in 11 of 12 areas on the slide referenced by the Journal -- including serious areas like loneliness, anxiety, sadness and eating issues -- more teenage girls who said they struggled with that issue also said Instagram made those difficult times better rather than worse."
But when it comes to young people's health or well-being, every negative experience matters. It is incredibly sad to think of a young person in a moment of distress who, instead of being comforted, has their experience made worse. We have worked for years on industry-leading efforts to help people in these moments and I'm proud of the work we've done. We constantly use our research to improve this work further.
Similar to balancing other social issues, I don't believe private companies should make all of the decisions on their own. That's why we have advocated for updated internet regulations for several years now. I have testified in Congress multiple times and asked them to update these regulations. I've written op-eds outlining the areas of regulation we think are most important related to elections, harmful content, privacy, and competition.
We're committed to doing the best work we can, but at some level the right body to assess tradeoffs between social equities is our democratically elected Congress. For example, what is the right age for teens to be able to use internet services? How should internet services verify people's ages? And how should companies balance teens' privacy while giving parents visibility into their activity?
If we're going to have an informed conversation about the effects of social media on young people, it's important to start with a full picture. We're committed to doing more research ourselves and making more research publicly available.
That said, I'm worried about the incentives that are being set here. We have an industry-leading research program so that we can identify important issues and work on them. It's disheartening to see that work taken out of context and used to construct a false narrative that we don't care. If we attack organizations making an effort to study their impact on the world, we're effectively sending the message that it's safer not to look at all, in case you find something that could be held against you. That's the conclusion other companies seem to have reached, and I think that leads to a place that would be far worse for society. Even though it might be easier for us to follow that path, we're going to keep doing research because it's the right thing to do.
I know it's frustrating to see the good work we do get mischaracterized, especially for those of you who are making important contributions across safety, integrity, research and product. But I believe that over the long term if we keep trying to do what's right and delivering experiences that improve people's lives, it will be better for our community and our business. I've asked leaders across the company to do deep dives on our work across many areas over the next few days so you can see everything that we're doing to get there.
When I reflect on our work, I think about the real impact we have on the world -- the people who can now stay in touch with their loved ones, create opportunities to support themselves, and find community. This is why billions of people love our products. I'm proud of everything we do to keep building the best social products in the world and grateful to all of you for the work you do here every day.
Did anyone edit or proof this beyond maybe spelling and grammar? It comes across as incredibly sophomoric and petty. Moreover, it reads like the nervous retort of someone who isn’t used to being questioned or contradicted. This seems like exactly the wrong tone at the moment.
Honestly as a tech person, I’m concerned that he really makes the rest of us look like shit.
That's fairly amusing given in the thread above this someone is absolutely certain that he didn't write it and that it was written by PR, Comms and lawyers.
This is about as "us vs. them" as it gets, which is to say that it's about as political of a statement as it gets.
I don't think any of this fight is about the actual issue of social media's impact, but perhaps I was naive to ever even think that it was about those issues to begin with.
I absolutely detest the polarization happening right now. I will however say that I deleted Facebook and Instagram about a year ago because I found myself comparing myself to my female peers to the point I was insecure, feeling inadequate, and becoming materialistic despite my highly successful medical career, stable personal life and abundance of positive happenings in my life. I think it can definitely affect women especially but anyone is at risk. Once I deleted I have felt much more secure and less preoccupied with these issues.
"When I reflect on our work, I think about the real impact we have on the world -- the people who can now stay in touch with their loved ones, create opportunities to support themselves, and find community. This is why billions of people love our products. I'm proud of everything we do to keep building the best social products in the world and grateful to all of you for the work you do here every day."
This is cherry-picking a bit much, but I guess it's not unexpected.
Something critical I've learned in philosophy classes is close reading of statements like this one: looking for clever logic, weasel words, and misdirection.
Some examples:
> If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place?
Tobacco companies researched their own products also.
> ...widely recognized as better and safer than alternatives
Low-tar tobacco is widely recognized as better and safer than the alternatives.
> ... that many teens we heard from feel that using Instagram helps them ...
That doesn't mean it isn't hurting many other teens. Just because narcissists love your product doesn't make it good.
> That's why we have advocated for updated internet regulations for several years now.
Updated just means "new version", not actually better. Lobbyists have NOT advocated for protections that would undermine profits, which is the very point this whistleblower is making.
"Hey multitude, how are you? I'm feeling some pressure and I wanted to say something to make me seem like a decent person for a change. Let's see if I get it right LOL.
First, we had a bad outage, but users are so addicted that they'll all return as soon as the monkey on their backs starts screaming. In fact, the little display on my desk says they are already back.
Second, now that I've stopped my red-faced rage fit about the testimony, I wanted to reflect on the public debate that makes me look bad, and by extension, makes you feel like a nest of weasels. Whenever we stop making money hand-over-fist for just a moment, we probably should make some kind of placating statement about issues like safety, well-being, and mental health. Then get right back to shoving ads in front of the zombies and mining personal data for billions!
Besides, many of the claims don't make any sense. If we were lying all the time, why would I repeatedly say "Oops, gosh I didn't mean to do that, it was an accident!" like a five-year-old child who doesn't understand consequences? Why would I, huh?
And if social media were as responsible for polarizing society as some people claim, then why are the factions so stable in the US? Year after year, we have really stable factions. That's a sign of a strong democracy.
At the heart (it's just an expression!) of these accusations is this idea that we love making money more than we love our zombie user-base. That's just not true. I mean, zombies are boring unless they somehow get over the security barriers.
The argument that we deliberately make people angry with our crap for profit is deeply hurtful. We make money from ads, and our advertisers consistently tell us they don't want to get caught and be associated with the garbage that they produce. Seems fair to me! Anyway, I don't know any tech company that sets out to build products that make people angry or depressed, apart from, you know, Microsoft, Google, Oracle, IBM, and so on.
But of everything published, I'm particularly troubled about kids. The reality is that young people use technology, but not ~our~ technology. We're deeply committed to addicting the little beasts to Messenger Kids. That's guaranteed future revenue.
But given the sh!tstorm that whatsername has caused, we've paused that project until people get back to fussing about other things and take the heat off of us.
It is incredibly sad to think of a young person in a moment of distress. They need a false sense of support. Give 'em an account and let God sort 'em out, I say.
Similar to balancing other social issues, I don't believe private companies should do more than pretend to have values and ethics. It's all too complicated and it interferes with making moolah. Let Congress make the laws, and let corporations bribe elected officials the way the democracy requires. We have these institutions for a reason.
If we're going to have an informed conversation about the effects of social media on young people, it's important not to get hit with the blame. So I'll be on the horn to elected officials and wiring money to their off-shore bank accounts. (How about those Panama Papers? I know, right! Crazy!)
I know it's frustrating to see our profitable work receive sustained criticism from weirdos. But I believe that over the long term if we keep making huge profits, people will eventually buy shares and stop their whinging. In the meantime, I've asked leaders across the company to do deep purges of any New Age kooks who might go rogue.
When I reflect on our work, I think about the money and control. This is why billions of zombies are hooked. Keep up the good work, turn up the beats, and nevermind the screams outside."
I think Zuck writes them himself and nobody has the cojones to edit.
Facebook has had a really consistent, weird, “true believer” voice over the years. I can’t believe they can keep indoctrinated PR folks around that long.
It’s pretty scary. The dude is too rich and powerful.
"We care deeply about issues like safety, well-being and mental health"
As always, these people seem to prevaricate; we all know he means "We care deeply about profits and any issue that can impede our growth."
"Many of the claims don't make any sense. If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place?:
C'mon, you allowed the research in the hope that if it produced flattering results you could show them off to the world. And as you were trying to roll out Instagram for kids, such research was essential. And you hid that research from the public, which means you don't care about child safety or whatever.
When Facebook was down yesterday, I was surprised to see so many people on HN posting about it with what looked like the nervous energy of people in low-level withdrawal, because there is so much antipathy about it in threads like this one. It's possible that there are totally non-overlapping sets of HN users in both cases, but realistically I think most people just have a complex relationship with it: when Facebook is offline, it's a major event in their lives, but when it's back up, it's worthy of nothing but overt contempt. What most people (here, and everywhere) can't seem to do is just minimize the footprint of Facebook in their lives.
This thing has turned into a crusade over a moral panic. What I want to know is why Facebook is being singled out. Twitter is infested with people that have hammer/sickle symbols in their bios. Reddit has a history of tolerating child pornography. What makes Facebook special?
> If we didn't care about fighting harmful content, then why would we employ so many more people dedicated to this than any other company in our space -- even ones larger than us?
Maybe I am missing something obvious, but who is he talking about here?
Maximizing profit and shareholder value is the only thing that matters to the executives, and BOD of Facebook.
Just like tobacco companies, just like Exxon, just like every other company.
I’m baffled by the population’s belief that ANY company will act in any way but its own best interest.
Governments are supposed to protect their citizens, but instead have been co-opted by large companies over the threat of job loss, and the population agrees.
Every company will be as absolutely evil as the population allows, and it seems the population of most countries are completely OK with almost any behavior.
Facebook absolutely knows that hacking groups and nation states are using their platform as a mechanism to wage a disinformation war against targeted audiences.
A person standing on a box spewing nonsense amounts to nothing. A person standing on a box spewing nonsense with a crowd garners some attention. A person standing on a box spewing nonsense with a huge crowd garners more attention.
Facebook allows for the virtual creation of a person standing on a box with the appearance of millions and millions of people crowded around. Most people don’t understand the abstract nature of technology, so they assume the virtual person is real and the virtual crowd is real; and since humans are pack animals, they just fall in with the virtual pack.
Facebook knows all of this, and they know how to stop it, but stopping it would damage their revenue and profits, and expose them as frauds which would ultimately destroy the company. Not wanting to destroy profits they hand wave, and release statements, but they know full well their platform is nothing but a profit and propaganda machine. Nothing more, nothing less, and the population doesn’t understand, or care. Apparently the masses need their opiates where they can find them.
I think you're dead wrong on this. Mark Zuckerberg is actually in an almost unique position in that he has full control of Facebook, so he can do literally anything he wants. He doesn't have to prioritize profit or shareholder value; he could sit in his board meeting just continually insulting his board's family members. He really does believe in what he's doing.
The problem is that sitting in your basement coding in PHP isn't going to give you the grounding in psychology, sociology, and politics that running a social network needs. Being a billionaire at the age of 23 also doesn't give you a great lesson in how to learn from mistakes and take criticism. All the way through his management of Facebook, Mark has been publicly dragged through lessons about life that most people learn privately as teenagers, because he never got a normal life in which to learn them.
the reality is that people would stop using facebook if they really hated it that much, and yet it still has 2.5 billion MAUs. there's a very loud minority of people who keep screaming "facebook bad!" because the NYT told them to (because facebook is eating their lunch). facebook doesn't inject your brain with foreign chemicals, it doesn't hold a gun to your head and force you to scroll through your timeline. don't like using facebook? close your fucking web browser.
I think a lot of people don’t have the self-awareness to realize the impacts on their own personal life. Many people aren’t able to be introspective or even fathom the possibility of potential defects of the internet or social media. It’s one thing to say the thing itself is bad. It’s another to say what it’s doing to them is bad and that they need to initiate change for themselves. People get addicted to habits and mindsets, and that’s just as powerful as drugs. Understand your audience.
That being said, a large number of people are also pleased and are either able to use it without being affected by the negative effects or being ok with them/ not believing they are being harmed. I believe technology and the ability to connect can be a powerful and positive force when used in a healthy way.
The reality is that people would stop drinking soda, smoking cigarettes, binge drinking alcohol, or sitting in a chair 12 hours a day if those things were really that bad for them.
I don’t like to use Facebook, but extended family events are often planned on Facebook, local businesses sometime only use Facebook and Marketplace has become a popular alternative to Craigslist.
As usual, it’s not that simple. It should be ok to complain about Facebook and ask it to be better.
it's okay to complain about facebook and ask it to be better, but look at the situation we're in. people are claiming zuckerberg is responsible for the capitol insurrection and the genocide in Myanmar. we've reached full-on hysteria at this point, facebook derangement syndrome is real.
Every argument of the form "why would we piss off advertisers" or "why would we make our users unhappy" is void, because they are a monopoly. The mechanism that would make them responsive to customers is completely absent. Everyone hates Facebook, because it sucks. And everyone uses it, because it's a product with the most powerful network effect conceivable.
There is a way to have competition, and that is to create a standard for interoperability between competing social media platforms. In theory, the US government could legislate something like that, but in practice that's unthinkable.
So it has to come from Facebook itself. That's the only thing that could convince me that Facebook gives a shit about its users. If you really think people are with Facebook because they like it, prove it. Let them leave. Give them the choice.
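To make the interoperability idea concrete: something in the spirit of ActivityPub (the W3C federation standard that networks like Mastodon already use) would define a portable format for posts that any platform must be able to export and import. A purely hypothetical sketch in Python, with every field and function name invented for illustration:

    import json
    from dataclasses import dataclass, asdict
    from typing import Optional

    @dataclass
    class PortablePost:
        author: str                        # globally unique handle, e.g. "alice@example.net"
        created_at: str                    # ISO 8601 timestamp
        text: str
        in_reply_to: Optional[str] = None  # URI of the parent post, if any

    def export_post(post: PortablePost) -> str:
        """Serialize a post so a competing platform could import it as-is."""
        return json.dumps(asdict(post))

    print(export_post(PortablePost("alice@example.net", "2021-10-05T12:00:00Z", "hello")))

The exact schema is beside the point; once posts and contact graphs are exportable in a common format, leaving no longer means losing everyone you know, and the network effect stops being a moat.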
I don't ever (except this) post here and don't use Facebook, and I'm trying my very best to get all my contacts to at least try something other than WhatsApp. I've not read the statement, but I'm pretty sure I know the gist of it.
The easiest thing I/You can control is what apps are installed on your phone and I choose to not have them.
I find Facebook and Instagram to be incredibly toxic walled gardens, even as just a lurker. It is fair to say that most people have Facebook/Instagram for fear of missing out on some event of some kind.
I suppose the reason I'm posting is that no matter how much bad media/press Facebook gets, people will be people and just continue on. I even think Mark Zuckerberg could shoot someone on 5th Avenue and people would still use Facebook.
Anyway rant over.
Marc Benioff was right. Facebook is the new cigarettes. The internet has transformed our lives and societies in dramatic ways. Some of them for the better, many of them for the worse. At the heart of the damage being done today, you will find none other than social media, and in particular, Facebook.
I haven’t seen any counterarguments to these two points he has raised.
> And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?
> Similar to balancing other social issues, I don't believe private companies should make all of the decisions on their own. That's why we have advocated for updated internet regulations for several years now. I have testified in Congress multiple times and asked them to update these regulations. I've written op-eds outlining the areas of regulation we think are most important related to elections, harmful content, privacy, and competition.
Not surprised to see Mark get all defensive. It is very simple to analyze this objectively. They want to make huge profits and they want people to use FB as much as possible. Fine. That's their right. They are neither evil nor saints. Similarly, use of the service has toxic effects on many, many of its users, especially young ones (no matter what Mark and his team say). That's obviously not good, but it must also be recognised as a voluntary choice to engage with a platform that causes mental health issues. If we look at it straight, my opinion is that the solution isn't really regulation but simply that more of us need to stop using social media, and parents should not allow kids to have an account until they are adults.
Honestly I have been so happy with the unreasonable effectiveness of RSS readers. It’s just incredibly delightful, and wish it weren’t as niche. However, perhaps I should revel in its ability to fly just off the radar.
> The deeper concern with an outage like this isn't how many people switch to competitive services or how much money we lose, but what it means for the people who rely on our services to communicate with loved ones, run their businesses, or support their communities.
The messaging between Facebook and media outlets here is so consistent (Facebook's services are critical communications infrastructure, people get hurt when they can't use them) that I wonder whether Facebook was actively pushing that message during the outage to get media to focus on the impact on people's lives (such focus being overall positive for Facebook, by showing how important their services are to people) rather than the failure.
I just don't buy it at all though. In terms of person-to-person communication, the friction of replacing Facebook/WhatsApp/Instagram is insignificant. Where I live, Messenger and WhatsApp are the most popular chat apps by far, and are used for everything from personal communications to ordering food to business negotiations. During the outage, people didn't wring their hands and despair at not being able to communicate. They just used a different app.
Some of this highlights an issue that I think about a lot (and it's obviously not just me): the duty of corporations to maximize profit of shareholders (as represented by the majority vote of the board, as I understand it) over all other concerns.
Zuckerberg writes this:
> At the heart of these accusations is this idea that we prioritize profit over safety and well-being. That's just not true.
Last I heard, he owned the voting shares of the company by a razor thin margin (~ 53%). If he owned even 0.1% less than half of the company, wouldn't this sentence be confessing to unlawful activity?
Unless, of course, we take a legalistic reading that he really means they maximize profit in the short term, but make it up in the long term by good publicity and such. But that's hardly the natural reading of what he wrote. It seems to me like he's ~ 3% of an ownership stake away from a situation where he'd be confessing to violating shareholder rights even by writing that--let alone being responsible to run the company in a way that actually does prioritize profit over safety and well-being. All of this to me seems like a flaw in the American system of financial law.
> if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world
I wonder what countries he is looking at. Polarization is definitely happening all over the world, including the heaviest social media users of all (Brazil, India, etc).
> And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?
History is literally repeating itself. Just as after the 2016 US elections, he's denying FB's role again.
"The research actually demonstrated that many teens we heard from feel that using Instagram helps them when they are struggling with the kinds of hard moments and issues teenagers have always faced. In fact, in 11 of 12 areas on the slide referenced by the Journal -- including serious areas like loneliness, anxiety, sadness and eating issues -- more teenage girls who said they struggled with that issue also said Instagram made those difficult times better rather than worse."
Does this translate as "Instagram is good for kids" though?
I mean, as a kid/teenager/young-adult I'd have told you chocolate and burgers are great sustenance and that there's nothing wrong with playing video games 16 hours a day sitting down, etc...
I'm genuinely curious whether that research is based entirely on feedback from the target audience; if so, on instinct, I disbelieve it.
> That's why we have advocated for updated internet regulations for several years now. I have testified in Congress multiple times and asked them to update these regulations.
Who's this supposed to benefit?
Facebook is undoubtedly a part of PRISM, so it's not like these bills are going to be drafted in favor of the American people (much less those who use Facebook internationally). If anything, I'm expecting a "Curbing Online Terrorism/Child Harassment/Hate Speech Act of 2022" to be right around the corner, granting our great and benevolent government even more unfettered insight into our lives. It's not very surprising, though; I don't take Zuckerberg for a strong-willed man. I believe that he'd sooner plunge our nation into a surveillance state than give up his mansions and yacht.
If the business model is fundamentally wrong, no amount of good will, governance, or resources can turn things around. The only way (I can think of) for private, for-profit social media to "truly care" (actually, just care a tiny bit more) is if their users (rather than advertisers / other third parties) were to become the paying clients.
This is about as feasible as a brain transplant. People have been conditioned that social media are "free", the gigantic valuations are based on personal data based adtech revenue, their true clients don't seem to give a damn about what kind of society and economy they engineered, and "markets" just cheer on.
It's a good statement, I just wish he'd spend more time on 'yes, I stole the original Facebook, and I tricked the Whatsapp founders into giving up their company to me, and I've routinely lied in the past, but moving forward...'
Seriously, as someone who does not use Facebook because I think it is harmful, I think we should put ourselves in his shoes for a second. There is no right way to answer Facebook scale questions, seriously.
> "The deeper concern with an outage like this isn't how many people switch to competitive services or how much money we lose, but what it means for the people who rely on our services to communicate with loved ones, run their businesses, or support their communities."
...and the deeper concern from the rest of us outside of Facebook is how the DNS servers can become overloaded with requests when a big system like this goes down.
I don't get why Twitter always gets a free pass on the harms from misinformation. I have seen so much fake news around vaccines and election integrity on there, not to mention that it's the nexus of woke cancellation mobs. I think it's because Twitter is seen as doing more (banning Trump, putting notifications on posts), but is that actually the case relative to what FB is doing?
Isn't Twitter just some sort of tiny echo chamber? Don't we regularly see studies claiming that only 2% of the users generate 99% of the content? Twitter is only harmful to businesses who hire marketing firms that think Twitter is representative of the general population.
My anecdotal impression is that YouTube and Twitter played a big role in spreading election misinformation. The main basis of the Kraken conspiracy theory came from a YouTube video which was spread on Twitter and I remember numerous election integrity fake news tweets getting hundreds of thousands of retweets from a large right-wing echo chamber. It may have been worse on FB but I do think the asymmetry of attention and scrutiny is far out of whack.
I'm going to guess the reason is that congresspeople and journalists see Twitter as their domain of control now and benefit from the privileges of a blue check mark and the other side being kicked off. Probably Zuckerberg's lack of charisma and physical appearance isn't helping either, however unfair that may be. Another reason might be that the FB newsfeed lacks transparency unlike Twitter which contributes to distrust.
> I don't get why Twitter always gets a free pass on the harms from misinformation.
Twitter is where all the mainstream journalists are. They are not going to bite the hand that "feeds" them. Facebook, on the other hand, still tolerates some amount of politically incorrect speech on its platform, and that's something the mainstream media, which wants to control speech online, cannot stand.
> I think it's because Twitter is seen as doing more (banning Trump, putting notifications on posts), but is that actually the case relative to what FB is doing?
Twitter certainly has tighter moderation when it comes to "moderating" alleged conservative speech, yes, which is what matters to the former camp.
Am I the only one who's annoyed that the message is gated, and that FB somehow needs to track everyone who has seen what I imagine amounts to a mea culpa for their clusterfuck of a week?
It's so weird to read about Web3, with its presumed focus on digital sovereignty and privacy, and then to see a platform moving backwards, away from any semblance of an open internet.
The data FB owns is precious today, and it is still growing in value. The problem is that, as a user, you are taken for granted. You have no information about which research studies your data is used in, or will be used in, in the future. I do not think FB realizes the ethical responsibility that comes with holding such valuable data.
Would Ethiopia be at war now without today's radicalizing web 'social' platforms?
Its government has shut down the Internet several times: officially to save lives, while the opposition says it's just to perpetuate violence.
We are facing big challenges here, of which the danger of harm to kids is just a small part.
> Similar to balancing other social issues, I don't believe private companies should make all of the decisions on their own.
Then give up majority control.
This article needs a lesson in compassion. It's all "no we're not", instead of "we understand why people feel this way, and this is what we need to do".
Facebook needs to kill the cash cow: the Open Graph-enhanced, "optimized" news feed. When FB was used for sharing pics of your kids or seeing what your friends were up to at school, there weren't any major issues with sedition that I remember.
Then they added the news feed. Questions:
1. Why can't users turn off the news feed permanently and only see user-created posts?
2. Why can't users organize/sort their news feed how they like rather than being "optimized"?
3. Why isn't Facebook making sure to include articles from respected news sources (NPR, NYT, WSJ, Axios, etc.) in everyone's feed?
Why doesn't FB implement any of these obvious solutions to bring everyone out of their silos and set a baseline of truth again? Because all those "kind and thoughtful" people working there would lose millions. That's way more important than the future of the country.
Edit: Also, every media outlet in the country is hypocritical. They shouldn't allow sharing on Facebook until they clean up their act, or at the minimum should stop using Open Graph on their web pages, reducing a share to a simple URL.
I got to the fifth instance of 'industry-leading' and had to stop reading. The general sentiment makes sense, but the language used at various points reads as if it were run through a PR ML bot.
> Facebook and subsequently Instagram thrive on interaction. The company is only profitable if it keeps you on its platform and it will do anything to keep you there, even at the cost of your own health.
The real problem, from Facebook's point of view, is the fallen nature of humanity; as soon as goodness and light are met with more engagement, Facebook will turn to promoting those qualities.
Hah. Loading the page then results in a giant picture of Zuck blocking the content inviting me to "See more of Mark Zuckerberg on Facebook" by logging in.
Without money behind it, the full-blown assault on knowledge and erudition would go quiet on all fronts, not just on Facebook.
So the solution is simple: we need to ban paid-for speech. If you can staunch the flow of money into social media by way of troll farms and political ads, I think this would help a lot. Even better if you could ban so-called "news" organizations that are little more than for-profit PR firms. At the very least, we need to spend real money to expose and eliminate troll farms, including attacking such organizations in their home countries.
Mark, just like Google and other IT companies did in the midst of the Trump presidency, you hired political activists, and now you are surprised these political hires are compromising your business secrets and betraying you? Read their manifesto: they are here to subvert and destroy, not to get you good, easy PR.
Silicon Valley can't have it both ways, and this is a situation it certainly created by pandering to political activists just for PR reasons. Now it acts all surprised when those activists do exactly what they said they would do, in public and on social media: subvert and destroy your business if you don't submit to their demands for censorship.
Well, prepare yourself for a bunch of heavy regulations as a result, and it will affect ALL STARTUPS.
> If we didn't care about fighting harmful content, then why would we employ so many more people dedicated to this than any other company in our space -- even ones larger than us?
And yet I have family who are absolutely convinced there are nanobots in the vaccine.
I'm seeing a lot of deflecting without addressing the core issues. Yeah, we know they have a lot of people and spend a lot of time on content moderation, but one major issue is that they gave a lot of people exceptional treatment. It took years of Trump's reign, and of the 'fake news' / misinformation era, before they even started to flag these things, and I'm convinced flagging content works as reinforcement for a lot of people: "look, the MSM is trying to suppress the Truth!"
The statement, for those who don't have FB accounts or don't want to log in:
"I wanted to share a note I wrote to everyone at our company.
---
Hey everyone: it's been quite a week, and I wanted to share some thoughts with all of you.
First, the SEV that took down all our services yesterday was the worst outage we've had in years. We've spent the past 24 hours debriefing how we can strengthen our systems against this kind of failure. This was also a reminder of how much our work matters to people. The deeper concern with an outage like this isn't how many people switch to competitive services or how much money we lose, but what it means for the people who rely on our services to communicate with loved ones, run their businesses, or support their communities.
Second, now that today's testimony is over, I wanted to reflect on the public debate we're in. I'm sure many of you have found the recent coverage hard to read because it just doesn't reflect the company we know. We care deeply about issues like safety, well-being and mental health. It's difficult to see coverage that misrepresents our work and our motives. At the most basic level, I think most of us just don't recognize the false picture of the company that is being painted.
Many of the claims don't make any sense. If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place? If we didn't care about fighting harmful content, then why would we employ so many more people dedicated to this than any other company in our space -- even ones larger than us? If we wanted to hide our results, why would we have established an industry-leading standard for transparency and reporting on what we're doing? And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?
At the heart of these accusations is this idea that we prioritize profit over safety and well-being. That's just not true. For example, one move that has been called into question is when we introduced the Meaningful Social Interactions change to News Feed. This change showed fewer viral videos and more content from friends and family -- which we did knowing it would mean people spent less time on Facebook, but that research suggested it was the right thing for people's well-being. Is that something a company focused on profits over people would do?
The argument that we deliberately push content that makes people angry for profit is deeply illogical. We make money from ads, and advertisers consistently tell us they don't want their ads next to harmful or angry content. And I don't know any tech company that sets out to build products that make people angry or depressed. The moral, business and product incentives all point in the opposite direction.
But of everything published, I'm particularly focused on the questions raised about our work with kids. I've spent a lot of time reflecting on the kinds of experiences I want my kids and others to have online, and it's very important to me that everything we build is safe and good for kids.
The reality is that young people use technology. Think about how many school-age kids have phones. Rather than ignoring this, technology companies should build experiences that meet their needs while also keeping them safe. We're deeply committed to doing industry-leading work in this area. A good example of this work is Messenger Kids, which is widely recognized as better and safer than alternatives.
We've also worked on bringing this kind of age-appropriate experience with parental controls for Instagram too. But given all the questions about whether this would actually be better for kids, we've paused that project to take more time to engage with experts and make sure anything we do would be helpful.
Like many of you, I found it difficult to read the mischaracterization of the research into how Instagram affects young people. As we wrote in our Newsroom post explaining this: "The research actually demonstrated that many teens we heard from feel that using Instagram helps them when they are struggling with the kinds of hard moments and issues teenagers have always faced. In fact, in 11 of 12 areas on the slide referenced by the Journal -- including serious areas like loneliness, anxiety, sadness and eating issues -- more teenage girls who said they struggled with that issue also said Instagram made those difficult times better rather than worse."
But when it comes to young people's health or well-being, every negative experience matters. It is incredibly sad to think of a young person in a moment of distress who, instead of being comforted, has their experience made worse. We have worked for years on industry-leading efforts to help people in these moments and I'm proud of the work we've done. We constantly use our research to improve this work further.
Similar to balancing other social issues, I don't believe private companies should make all of the decisions on their own. That's why we have advocated for updated internet regulations for several years now. I have testified in Congress multiple times and asked them to update these regulations. I've written op-eds outlining the areas of regulation we think are most important related to elections, harmful content, privacy, and competition.
We're committed to doing the best work we can, but at some level the right body to assess tradeoffs between social equities is our democratically elected Congress. For example, what is the right age for teens to be able to use internet services? How should internet services verify people's ages? And how should companies balance teens' privacy while giving parents visibility into their activity?
If we're going to have an informed conversation about the effects of social media on young people, it's important to start with a full picture. We're committed to doing more research ourselves and making more research publicly available.
That said, I'm worried about the incentives that are being set here. We have an industry-leading research program so that we can identify important issues and work on them. It's disheartening to see that work taken out of context and used to construct a false narrative that we don't care. If we attack organizations making an effort to study their impact on the world, we're effectively sending the message that it's safer not to look at all, in case you find something that could be held against you. That's the conclusion other companies seem to have reached, and I think that leads to a place that would be far worse for society. Even though it might be easier for us to follow that path, we're going to keep doing research because it's the right thing to do.
I know it's frustrating to see the good work we do get mischaracterized, especially for those of you who are making important contributions across safety, integrity, research and product. But I believe that over the long term if we keep trying to do what's right and delivering experiences that improve people's lives, it will be better for our community and our business. I've asked leaders across the company to do deep dives on our work across many areas over the next few days so you can see everything that we're doing to get there.
When I reflect on our work, I think about the real impact we have on the world -- the people who can now stay in touch with their loved ones, create opportunities to support themselves, and find community. This is why billions of people love our products. I'm proud of everything we do to keep building the best social products in the world and grateful to all of you for the work you do here every day."
> If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place?
> If we wanted to hide our results, why would we have established an industry-leading standard for transparency and reporting on what we're doing?
This is HILARIOUSLY out of touch with reality.
You don't call someone a whistleblower if the reports they leaked were TRANSPARENT and PUBLIC.
Seriously, what is wrong with this man? Does he have any clue whatsoever what impact his majority control over Facebook has on the behavior of the world?
Answering a suppositional question with yet another question is, well, questionable.
But, hey, if we're going to play games like this, then I'm game to take a stab at answering these questions-to-questions with yet more questions.
> If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place?
- Because it allows you to learn potential results before anyone else, thus giving you the advantage to control the narrative by releasing first?
> If we didn't care about fighting harmful content, then why would we employ so many more people dedicated to this than any other company in our space -- even ones larger than us?
- Because the cost of that labor is relatively cheap in comparison to your earnings, and making a visible effort (however well or poorly executed) gives you a convenient scapegoat to point toward in exactly these types of situations?
> If we wanted to hide our results, why would we have established an industry-leading standard for transparency and reporting on what we're doing?
- Because if you don't make it appear like you're playing ball, senators and congressmen would be more motivated to hammer down your door to appease their constituencies?
> ... if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?
- Because you don't apply the same algorithms or suggest the same content across all geographic locations?
> If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place?
I wonder why tobacco companies studied lung cancer and oil companies studied climate change. Was it because those industries thought those issues were more important than profit?
> Was it because those industries thought those issues were more important than profit?
I take a more cynical view - they wanted to get ahead of the narrative before the public did. It meant a better PR angle, a more well thought out strategy to thwart external pressure, and a better forecast on how long you could milk the cow.
To be clear, I was being sarcastic. I agree with you and don't think it is in any way cynical. It is the obvious reason. It allowed these industries to change the public discourse regarding these issues. For example, the oil industry was a big force behind the "personal responsibility" angle of fighting environmental problems, trying to shift public perception from blaming the oil industry to blaming individual consumers.
I'm so baffled by Mr. Zuckerberg's response here: the core complaint is that research was sidelined and deemphasized, not that it didn't happen at all.
We can both be satisfied Facebook is seriously researching its impact on society, and also appalled that it has been too slow to act on the results of serious internal research. Our complaints with Facebook are complex; Mr. Zuckerberg is giving such a naive response.
Indeed. It was important for them to push a different narrative, one that let people think tobacco was good and oil was not affecting the environment that much.
So it's the same for Facebook, I guess: they want to show us that they care and that they're not the evil ones here.
There is plenty of unbiased, independent research showing the opposite of what they keep claiming.
What “does his majority control over Facebook” have to do with the accusations about Facebook’s behavior? Would the behavior be ok if it were majority controlled by mutual funds?
> What “does his majority control over Facebook” have to do with the accusations about Facebook’s behavior? Would the behavior be ok if it were majority controlled by mutual funds?
It has to do with the fact that if you control something, you are directly responsible for it. By having sole control of FB, MZ is completely responsible for it, for better or worse.
It also means that if the board were composed of a diverse set of people (which it is), they could oust him if they wanted to (they can't). The public could put pressure on those people much more easily than putting sole pressure on him.
The implicit assertion is that this behavior wouldn't happen if not for Zuckerberg personally causing the behavior.
As someone who has worked at large tech companies though, I find that to be an extremely questionable assertion.
FB has incentives. Zuckerberg didn't invent the dynamics surrounding their business, and I don't see how having a faceless bureaucracy in charge would lead to an organization that is more willing to reject its own incentives.
If anything it would seem like having more obscure and diffuse leadership would lead to less accountability, not more.
Here's what's new: she is explaining the situation to politicians of both parties in terms of the perverse incentive structure at Facebook, and successfully diverting the conversation away from whether it should censor more Democratic or Republican content, toward the real issue: that no one has both the resources and the information to assess and improve safety on the platform.
Also, industry-leading research? The leaked internal data about Instagram is worse than an undergrad term paper. Look at those n values; for something as big as FB, that's not even serious research.
Same reason the tobacco industry created research programs to look at the health effects of smoking. Neither their research nor Facebook's could ever be mistaken for "industry-leading", though.
Big tobacco's _published_ research reflected their product favorably.
Maybe a better example is oil, where, because of legal issues, the public was able to peer into unpublished research. They indisputably knew about anthropogenic climate change back in at least the early 80s and continued to push policies to the contrary because doing otherwise was against their bottom line. They simply used their research to understand what was coming and to create propaganda ahead of the curve.
As someone else said in the comments already, today's testimony wasn't about whether or not Facebook does research; it was about them hiding the results when those results didn't support whatever drivel FB marketing puts out about changing the world for the good of all mankind.
This is so incredibly on brand for Zuckerberg (and Sandberg). This sort of gaslighting FUD is exactly what they always do when they get caught lying. They just lie more.
“At the heart of these accusations is this idea that we prioritize profit over safety and well-being. That's just not true.“ yeah mz, whatever. just goose my stock options good this year for the holidays and we’re cool.
So many CEOs would come back with their tails tucked between their legs when facing an aggressive PR attack; it's so refreshing to see someone unapologetic for a change, standing their ground against blatant lies and angry mobs with pitchforks. This gives me hope for the American future.
Hey, can you help me understand why you consider the comments and media around this statement a blatant lie? I'm absolutely in support of people speaking their minds, especially in today's culture, but I'm not sure why the public opinion that there are legitimate effects from social media on people, specifically from Facebook, would be a blatant lie in your eyes?
What they call "divisive and extremist actions" that need to be censored is basically anything which doesn't agree with their political ideology. It's a direct attack on free speech.
We've banned this account for using HN primarily for ideological battle. That's not what this site is for and it destroys what it is for, so we ban such accounts regardless of what ideology they're battling for or against.
Please don't create accounts to break HN's rules with.
Zuckerberg IS Facebook. He controls it. Who's to say that even if the main FB app, the IG app, and WhatsApp all lost their entire userbases and revenue went to 0.05% of what it is now, he wouldn't just run the 500-person skeleton of what they currently are and focus on his latest pet project, whether that be VR or something else? Honestly, that's what I'd do if I were a billionaire with a passion for technology and a failing company, but with millions in cash reserves.
Is there a social media platform that's actually "better" than Facebook? It's still not clear to me why Facebook is so much worse than Twitter, YouTube, Reddit, TikTok, etc. I think any social media platform with significant adoption will have the same issues that we observe with Facebook, right?
Commander BGP (aka DATA) entered the "sleep" command. The faceborgs found they couldn't enter the building, connect to the collective, or take over the Galaxy.
> We're committed to doing the best work we can, but at some level the right body to assess tradeoffs between social equities is our democratically elected Congress. For example, what is the right age for teens to be able to use internet services? How should internet services verify people's ages? And how should companies balance teens' privacy while giving parents visibility into their activity?
Oh fuck, here we go. This asshole is going to be the death of so many good things if we're not careful. Let's flip the questions a bit.
How should internet services verify people are teaching their children how to be adults properly?
Very calm in tone. In Zuckerberg's place, I would have been tempted to instead write something like this: "Hi, censorious ninnies. In the 50s you would have been blaming godlessness for hurting children, in the 90s you would have been blaming video games for hurting children, now you blame Facebook. Please take your desire to police content on Facebook even more than we already police it and shove it up your own ass."
I have no love for Facebook, but my problem with it is not that I think they police content too little, it is that I think they police content too much. No, I do not want to police content on Facebook to save children. Similarly, I do not want to ban controversial books, I do not want to ban extreme sports, I do not want to ban fast food, and I do not want to ban many other things that sometimes hurt people. If I have to choose between, on the one hand, Facebook with all of its shadiness and, on the other, puritanical moral busybodies who want to control what people see "for their own good" - well, I choose Facebook.
Facebook is an awful environment, but this political witch hunt is many times more dangerous than anything social media can do.
It's a charade, parading out the typical "threat to children" and "dangerous to society" rhetoric politicians always use when they want to grab power. The entire goal is to manufacture a pretext for controlling speech online.
Stop using Facebook, and stop falling for authoritarian claptrap.
> The best I can come up with is that Facebook is so big that the "evil" is an emergent property of all the different things that are happening. It's so big no one can comprehend the big picture of it all, so while the individuals involved have good intentions with what they are working on, the sum total of all employees' intentions ends up broken.
> So maybe Zuck is telling the truth here, that they are trying to fix all this. But no one can see the forrest from the trees.
I can't reconcile it any other way.