Hacker News

25% is a completely arbitrary number and context dependent. I can guarantee you can find many communities holding majority views that you find abhorrent and would not want as part of your community discourse. The problem is that social media gives the illusion of a broad town square where all opinions are heard, but that is not what happens. Everyone on social media is filtered into silos based on what the algorithms predict they will find engaging. In such an environment, it is not hard at all for malicious actors to propagate incendiary lies and exaggerations that metastasize into political beliefs. A fringe belief can easily become mainstream if it is amplified unchallenged, which is exactly what happens every day on social media.


The thing is, these services are exactly that. Services.

What the consumer wants from those services is "free speech", but with restrictions. They want "uncensored" content with the objectionable bits removed. For some people "objectionable" means spam and pornography, for others it includes certain types of political discourse or content from certain classes of person. If people really wanted uncensored content, the dark web would be far more popular.

The only way these companies can give people both uncensored "free speech" and content moderation is to build these bubbles where freedom of speech is only freedom of one type of speech.

They're stuck in a catch-22, and I can't help but feel like they actually ARE providing the service that we demand from them to the best of their abilities.


Yes, it's arbitrary; 20 would make the same point, 0.1 would not.

If you're saying "we must censor abhorrent viewpoints for the good of society", I'll just counter that your viewpoints are horrible and must be suppressed, while mine are good and must be amplified. For the good of society.


Sounds good.

Now build a Facebook and get enough users to rally to your cause, and your opinion on suppression / amplification will have some weight to throw around.

People seem to forget that Facebook is where it is because users keep showing up, and users keep showing up because the censorship gives them something they want. It's a feedback loop.


"Might makes right, sit there and take it" might be how the world works in some ways, but it's not exactly a moral cause.


What does "might" mean in this context? Nobody showed up with a gun to force people to make a Facebook account.

It is, at worst, "popularity makes right." Which, to be clear: there are philosophies that take significant umbrage with that (there's a reason the US government isn't a strict popular vote for every position).

But the complaint seems to boil down to "I want people to go do something else because... I know they should." Not exactly compelling. People know themselves better than strangers do.

This isn't a claim that might makes right. It's a challenge to replace a theory of how people want to engage with the world with practice. I suspect (because we keep seeing the same patterns over and over) that a replacement for Facebook will either not catch on the way Facebook did or will find the need for heavy-handed moderation at some point in the not-too-distant future.


The idea of political manipulation and/or censorship on social media only became a thing after 2016, and even then it took time to ramp up.

All of the incumbents were established with network effects long before then; they are very sticky and unaccountable. Not Ma Bell's level of natural monopoly, but the network effects are pretty strong.

Look at Twitter under Musk: your standard beltway liberal type still uses it even though they hate him.


"Network effects" are a cheap excuse for people to not put their money where their mouths are.

I don't personally have a lot of respect for the people still using Twitter. I deleted my account before Musk bought them when they responded to a notorious TOS violator being elected President by changing their TOS.

All we have to do to hold the incumbents accountable is log off their service and log on to another one. Nearly 100% of the power in this situation is in the hands of the users.


There aren't any views I want to exclude from public discourse. Moderating so that they are expressed without all-caps profanity is one thing, but the views themselves ought to be protected. As for false facts, it can become treacherous to draw an exact line between the false and the disputed in many subject areas. X's "Community Notes" are not perfect, but in practice they have been surprisingly helpful and accurate in my experience.



