Hacker News

Gotta pay the team of people that wholesale censor information there



the mods are even worse than admins in many respects. mods for popular subs have considerable control over what is allowed or not. Many subs have enormous blacklists of users, domains, and even words. As well as stupid, arbitrary content guidelines pertaining to length, the title, the body, and other stuff. Reddit admins have the most power but they tend to not get involved unless a sitewide rule is broken.


I've been permanently banned from almost all of the top COVID related subs for daring to discuss disallowed topics, like the Wuhan lab, as well as expressing anti-lockdown sentiments. My appeals have been instantly denied as well.

I understand the idea that subreddits are user-run communities and should be able to self-moderate, but there are a couple problems with this:

1. Reddit claims to be the "front page of the internet" and promotes many of these top subreddits with front-page rankings, user suggestions, and push notifications. Therefore, reddit mods have immense power in controlling the flow of information to those who believe they're interacting with a reputable source, and Reddit corporate has washed their hands of any responsibility to have a say in this, despite the profit they receive from it.

2. Reddit corporate certainly is willing to get involved with subreddits, by banning or quarantining certain subs for spreading "misinformation", but this seems selectively applied to one particular side of these issues. Anti-lockdown subreddits are banned for downplaying Covid, but major Covid subreddits openly feature fear-mongering posts that overplay, say, the risk of the virus to kids. Both are misinformation, but one is allowed and the other is banned.


Reddit was a bastion of free speech but it's suffered a steep decline in quality and content. It's a toxic place with heavy handed dark UI patterns: they've lost their minds over there.


> Reddit was a bastion of free speech

Big emphasis on "was".


This seems more like a Silicon Valley issue than a Reddit one. And Silicon Valley generally doesn’t have much motivation to get people doing actual stuff, people sitting at home is where the cash is.


The nice thing is as alternative tech platforms rise up (communities.win, ovarit.com, ruqqus, rumble, etc) we will see a more decentralized web and these tech giants will soon be forgotten. Hopefully, they'll take their toxic VCs with them, but I think that may be a pipe dream.


I was going to say, there are plenty of subs for peddling that kind of content, but looks like the major one, r/NoNewNormal, was just quarantined (ironic) today, which is often the first step towards a total ban. Reddit's a private company, and if they don't want that kind of content on their site, shouldn't that be their choice?


The problem with the "private company" line is that, yes, they're a private company, but censorship is definitely still bad!

Free speech is a fundamentally protected right in the US. From the Founder's perspective, the greatest threat to that right was clearly and obviously government, because what other entity could possibly have that much reach into one's life? Up until the internet age, no one could imagine a private company or a private individual having the capacity to infringe upon free speech at scale.

So, we have a Constitutional right to free speech, protected against infringement by the government, which is great, but there is another threat to the free flow of information and ideas, and that is private corporations who can now infringe upon this right at scale. And we don't have the tools or framework to defend it, because private companies can do what they want? Thats not good enough for me. The situation is dire when private companies appoint themselves to be the arbiters of truth, because even with the best intentions, there are bound to be mistakes, as we've already seen. And they don't all have the best intentions.


You post about free speech but appear to be quite ignorant about the topic. Example: why is it considered a crime to yell "fire" in a crowded theater? Educate yourself and stop posting annoying banal rhetoric.


That’s all you have to say? The fire in a theater example (and you call me banal)? You really can’t think of any other form of speech that has been unfairly censored by social media sites?

Are you just not paying attention?


> you call me banal?

Your reading comprehension seems a bit lacking, casting doubt on your other statements. To be clear, I am asserting that your claim that censorship by private companies is unjust is banal, annoying, and ignorant.


Forget about “just” or not.

Do you or do you not think it's a good thing for private individuals to be able to unilaterally censor what can be said in what really is the new public square?


The fact you perceive any particular message posted to any particular for profit social media site as being a “public square” is more the result of successful marketing and ignorance than reality.


It clearly and obviously is a public square. So is Hacker News, so is Facebook. So is Twitter.

It's where people go to get news and share ideas.


The web is a public square.

Hacker news, Facebook, Twitter, Reddit etc. are private properties surrounding the public square.

You can do what you like (within the bounds of the law, because even public squares have rules) with your own property on the web, but you also have to follow the rules of any other establishment if you enter.


I’m not sure why this is so hard for pageandrew to grasp. I assume that since anyone can register and use these sites, they think that makes them “public”, when in fact people who register with and use these sites are paying for the privilege: with their personal data, their attention, their clicks, and their manufactured outrage that drives engagement, precisely what pageandrew is trying to do here.

User pageandrew can set up their own stall by the web public area and publish whatever they wish, discuss whatever they wish with whomever they wish, also censor whoever they wish. It’s their stall, their property. Just as twitter etc are the property of others. Pageandrew.com will most likely not have the views or clicks though, and that’s what they need to peddle their outrage.


I'm not calling pageandrew out personally - it seems like many people on this site can't seem to see the forest for the trees in this regard. The narrative that a few FAANG sites have completely centralized and commoditized the web to the point that they have de facto become the web is necessary for the fear narrative of a vast leftist conspiracy controlling the media and persecuting free thinkers, but also completely wrong.

Look at Youtube. The platform has been demonetizing and deplatforming accounts for obscure and opaque reasons for years, and not just Conservatives and gun videos. The result has been people advertising their non YT content on alternative platforms. If enough people are upset with the way a platform does business, they go elsewhere.

People get kicked off of 4chan, they move to 8chan. They get kicked off of Reddit, they go to Voat. Plenty of platforms serving the persecution complex of modern Conservatives, Trump supporters, incels and the alt-right have shown up on Hacker News. Gab and Parler still exist, and are still cesspools of free speech. And that's just on the open web.

Censorship at the platform level is not a problem, because there can always be alternative platforms. Censorship at the network level is a problem. That's why I'm vehemently opposed to arguments that the government should step in and regulate all social media platforms, or force them to publish content against the will, or require a judge to sign off on any moderation action. I'd rather have parts of the web play by rules I don't agree with than have the whole thing play by one set of rules at the point of a gun.


> Example why is it considered a crime to yell “fire” in a crowded theater?

Is it?

https://www.popehat.com/2012/09/19/three-generations-of-a-ha...


Probable summary: Of course it's not, and that was an argument used to jail antiwar socialists handing out pamphlets.


Plenty of subs get quarantined and then don't get banned even years later.


Giving fringe viewpoints a megaphone and that kind of validation is a terrible idea. We have seen it be abused and fail us consistently over the last decade in both mainstream and social media.


I don't trust large corporations to decide what is fringe and what is not, because not only is it a slippery slope, it has been abused consistently especially over the last few years.

Look no further than Facebook's handling of COVID "misinformation". From the start of the pandemic, they have enshrined the current statements of the CDC as "information" and everything else as "misinformation". For example, when the supposed "scientific consensus" was that COVID-19 100% came from nature, anyone who suggested it could have been manmade was called a conspiracy theorist and banned from the platform. Now, a year later, those scientists are backtracking, and some are even admitting that they took the natural-origin stance simply to not be associated with Trump, who was taking the other side. That's not science.

It was never a crazy idea. Anyone who could think for themselves knew that. But Facebook declared themselves the arbiter of truth, and decided that anything that wasn't said by the CDC needed to be censored.

When reasonable, logical ideas about important global health challenges are considered "fringe" simply because the authorities have declared them to be fringe, we have a serious problem.


Because the "man made" conspiracy was not based on any evidence or research. They were promoted by media personalities with no scientific backgrounds spreading a false narrative deeply rooted in white nationalism and helped to promote anti-Chinese hate crimes.

Obviously real scientists and social media platforms didn't want to be associated with or help to promote that at the time.


Was the front page of anything ever not curated?


There's a difference between curating the best recipes or cutest cat pics, and censoring certain ideas about pressing global health crises because they don't fit a certain narrative.


I mean things like newspapers or respected journals. I don't know of anything touted as a "front page" that's been entirely organically generated.


[flagged]


I'm glad you're here. If there's any chance that connecting would be helpful, you're welcome to email hn@ycombinator.com and I'll send you my personal email.


Well, of course, knowing that people out there are suffering with no way out triggers basic human pleasure.


Are you announcing a murder-suicide here? @dang


I understand how alarming it is when people post on such an extremely painful topic, but probably this is a place to recall the site guideline that says "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith." I can think of other things bserge might have meant.

https://news.ycombinator.com/newsguidelines.html

p.s. @dang is a no-op, so if you want to alert me/us to something, please email hn@ycombinator.com. I just happened to run across your comment.


Got ya, sorry!


It would be stupid of me to do that. Besides, death is easy, living in suffering is where the real hell is. And it's much easier to achieve, would you look at that.


I mod a surfing video sub, and I censor the heck out of wakeboarding videos! Oh, the power I wield! Mwahahaha!


That's what makes reddit great. Certain subreddits are heavily moderated and you get great content.

If you don't like how moderation is done, go start your own reddit.


> go start your own reddit.

This is actually what makes reddit great, that anyone can start their own subreddit, mod it however they want, and compete with all the others based on the merits of its own content, community and moderation style.


I think the worst part is how things are silently removed. On many sites you get a message saying 'your content has been removed for violating rule x'. On reddit you almost always get nothing. Often you don't even get a ban message and will just be added to a shadow ban list so you end up wasting your time commenting but no one can see it.

There is a site that shows you your deleted comments[1] and its pretty shocking. I found out every single one of my comments in one sub had been auto deleted with the reason being I did not have any link karma (because I only comment). What a waste of time and it turns me off from the platform.

[1] www.reveddit.com


Yes and no. That's great for niches but not for general, high profile topics, a country subreddit for instance.


Disagree - most city subreddits are pretty hands off and they get overrun by people coming from alt-right subs who don't even live in the city talking about crime.

Moderation often improves quality. It can also be bad, but is not universally so.


Untrue... reddit pushes some subreddits to the front page and bans other subreddits on the basis of spreading "disinformation".


However the platform lacks namespaces.

It is impossible to compete with the first one that claims the obvious name for the subject, such as, say r/startrek

The competing ones also censor heavily, so all it does is that one can now choose what particular opinions one can't express.

Competing on the merits of moderation style is insignificant compared to having the most straightforward name that everyone will try first; the subreddit with such name will always be the largest.


No no no, first you need to find several admins. Otherwise you'll get banned along with your subreddit lmao


"Great content" === what are you referring to exactly? The left-wing propaganda that dominates the front page, passed off as "news" or "science"?


Offhand, /r/legaladvice, /r/AskHistorians, and /r/AskScience stand out as being heavily moderated to the benefit of overall content quality. I don't see how any of those would be political propaganda (left-wing or otherwise).


I have never tried the latter two, but the first one is a very good example of a notoriously bad advice subreddit that almost all lawyers dislike for spreading constant misinformation; people who say the correct thing are often even banned by the moderators.

There are almost no actual lawyers on that subreddit for two reasons: A) actual lawyers would like to see compensation for their expertise; B) in many jurisdictions, lawyers are not allowed to give legal advice without establishing a formal attorney–client relationship.


Interesting, this is the first I'm hearing about that. Typically users will state whether or not they're lawyers, and if so the degree to which they're familiar with the OP's jurisdiction, and threads will often be locked and/or littered with removed comments for the reason "bad advice". Often, the only advice users will receive is to go see a lawyer.

Based on all that, it seemed pretty solid to me. I'm not saying you're wrong, but if I had to guess it's probably more a case of being hit-or-miss than entirely bad. Even a minority of bad advice would stand out to a lawyer in the same way that >=1% of wrong information in a tech publication would stand out to most of us here, particularly if the victims of said bad advice were in serious situations.

(IANAL or an active user of /r/legaladvice. I just pop in every so often when an interesting thread hits my front page, so whatever I'm exposed to is presumably better quality and more actively moderated than what's average for the sub.)


The latter two subreddits really do have excellent content, they are worth checking out.

/r/legaladvice is so bad there is actually a subreddit dedicated to discussing what's posted there, /r/bestoflegaladvice. There are more attorneys in that sub than the actual legal advice sub, where a surprising amount of advice is either blatantly wrong or essentially "just tell the police everything, they'll help."

/r/audiophile and /r/wine are two other well moderated subreddits, where the discussion has remained relatively focused and high quality even as the communities have grown, because the moderators aggressively prune low-effort and unrelated content. There was big drama, once upon a time, when /r/audiophile banned anything headphone related.


I stopped going to /r/AskHistorians because often the only allowed answers were badly sourced.

This is especially prevalent on events that are still relatively recent (last 40-50 years).

Second, it really is /r/AskHistorians ONLY; you are not allowed to mention any personal anecdata (i.e. participating in the fall of the Berlin Wall, etc.)

Even worse you are not allowed to attempt to provide official primary sources.

There was a question on WW2 Soviet tank production and German preparedness. I was not allowed to link, in a secondary comment, the Hitler–Mannerheim talks, where they talked about this very issue!

Mod reasoning: not enough context...


So join a different community on Reddit whose rules you like better, or make one. Why does everyone have to allow everything?


I think that once a sub attracts a large amount of readers, it's unethical to block certain conversations.

For large subs like /r/politics, a select few people get to control what's fed into the eyeballs of millions of people. And we don't even know what content wasn't allowed to be posted, how can we trust they're unbiased?


It's shown time and time again that the alternative, with a radical free-speech approach to content moderation, is that the community attracts users who post trash content (e.g., white nationalism, screwball conspiracy, glorification of violence, misogyny), who then gradually drive away everyone else. That's why they never attract much of a following and end up withering and dying.


As a former mod of /r/politics the reality is quite mundane. The bias you see on the front page of that subreddit is pretty much entirely due to the users upvoting. The mod team is really careful to ensure nothing gets removed simply due to political differences and is internally transparent, and includes mods from a wide spectrum of political beliefs and backgrounds. Interestingly the mod team is simultaneously accused of being biased towards left and right wing politics at once!


>I think that once a sub attracts a large amount of readers, it's unethical to block certain conversations.

Why? You're free to make your own subreddit with a different bias or no bias. No one really assumes it's unbiased and they go there because of the bias that they agree with. People like communities of like minded people.

By that logic any large scale media (tv, newspapers, magazines, etc.) should not be allowed to have bias even if the only reason people went there was for the bias.


This is quite true too. If you don't feel like politics is conservative enough for your tastes, worldnews is right there for you. And there are countless smaller communities catering to nearly any perspective imaginable.


Reddit pushes certain communities to the front page and suggests them to users, and quarantines/bans other communities.


I'm not going to shed any tears over them shutting down communities devoted to hate speech because the site has become noticeably more pleasant to read and participate in as those users went elsewhere.


The problem is winner-take-all effects and inherent difficulty of getting traffic to new subs.


I don't totally agree with this argument when applied to something like Twitter, but I can at least see the logic in it. When we're talking about something like a subreddit, I find the notion that you have to let anyone come and post whatever they want in your community because it's popular rather hard to swallow.


> Many subs have enormous blacklists of users,domains,and even words.

This is the most pathetic slam against mods I have ever seen and shows that you haven't spent two seconds considering what you're criticizing.


I get where you're coming from, as I created /r/relationship_advice. The existence of such blacklists, even "enormous" ones, isn't inherently bad; it's just a reality of dealing with an enormous flood of spam on a $0 budget.

That being said, blacklists on strings have 100% been abused by both the mods and the admins. Blacklists should be used for mitigating spam — e.g. we'll often block a specific attacker by blacklisting certain phrases or regexes and then deal with the inevitable edge case false positives by hand — not generally for censoring ideas or "offensive" words.

Sure, by all means block phrases like "kill yourself" in /r/SuicideWatch and maybe /r/relationship_advice. If a sub is inciting violence or posting CP, there's probably a case for banning it. But when people have to self-censor common curse words and even the word "fart" in a general forum like /r/tifu, clearly something is wrong.


> Sure, by all means block phrases like "kill yourself"

Even that will often lead to the Scunthorpe problem.

I remember well once scouring through a post that was rejected on a forum to finally realize it was because it contained the phrase “tardive dyskinesia” which contained “tard” which alone was enough to deny the post, apparently.
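The failure mode above is easy to demonstrate. Below is a minimal sketch (not any platform's actual filter; the `BLOCKED` list and function names are hypothetical) contrasting a naive substring blacklist, which flags "tardive dyskinesia", with a word-boundary regex that avoids that Scunthorpe-style false positive:

```python
import re

# Hypothetical blacklist entry, purely for illustration.
BLOCKED = ["tard"]

def naive_filter(text: str) -> bool:
    # Naive substring check: matches a blocked string anywhere,
    # even buried inside an innocent longer word.
    lowered = text.lower()
    return any(bad in lowered for bad in BLOCKED)

def boundary_filter(text: str) -> bool:
    # Word-boundary regex: only matches the blocked string when it
    # stands alone as a word, avoiding Scunthorpe-style hits.
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(bad)}\b", lowered)
               for bad in BLOCKED)

post = "The patient developed tardive dyskinesia."
print(naive_filter(post))     # True: false positive on "tardive"
print(boundary_filter(post))  # False: no standalone match
```

Word-boundary matching doesn't fix everything (deliberate misspellings still slip through, as noted downthread), but it removes the most embarrassing class of false positives essentially for free.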


FWIW, I think it's silly that reddit has banned the term "retard" site-wide, although I also acknowledge that the term may be becoming broader and more offensive than I personally understand it to be.

In the case of "kill yourself" in /r/SuicideWatch, though, it's such an extreme case with potentially disastrous results that I wouldn't find it particularly problematic. The occasional false positive is arguably a small price to pay.


> FWIW, I think it's silly that reddit has banned the term "retard" site-wide, although I also acknowledge that the term may be becoming broader and more offensive than I personally understand it to be.

Do you have any source of this? As far as I know Reddit bans no words site-wide, and I see the term used now and then.

On the very same website a post could not come through because I described someone as having “retarded pubertal development”, in any case.

> In the case of "kill yourself" in /r/SuicideWatch, though, it's such an extreme case with potentially disastrous results that I wouldn't find it particularly problematic. The occasional false positive is arguably a small price to pay.

It's also completely useless, because people who want to get through the censor will get through it by other means that will still make their message easily understood.

Word censors are window dressing, and people who want to get around them will get around them. I assume it's already against the rules on that subreddit to encourage suicide, so they would be quite willing to also break a rule to evade word censors.


> Do you have any source of this?

Not offhand, but I've heard of users getting temporary suspensions for using the word, although I don't believe there's an automated rule site-wide.

> It's also completely useless

Not entirely. Sure, if you're a dedicated attacker trying to tell a specific target to kill themselves, there's not much I can do about it; you'll acquire an aged account with decent karma and find a way to do it.

If you're just a regular person going about your day and say something stupid, an automod removal with a warning that circumventing the rule will result in a permanent ban is probably sufficient to make you give up and move on with your life. And of course bots that aren't specifically designed to circumvent my subreddit's rules would be trivially blocked.


> Not offhand, but I've heard of users getting temporary suspensions for using the word, although I don't believe there's an automated rule site-wide.

Merely for using the word? This seems like a myth to me. Various subreddits would obviously ban for it if used as a direct insult, but banning any use of it site-wide seems unlikely.

> If you're just a regular person going about your day and say something stupid, an automod removal with a warning that circumventing the rule will result in a permanent ban is probably sufficient to make you give up and move on with your life. And of course bots that aren't specifically designed to circumvent my subreddit's rules would be trivially blocked.

In which case it would be just as effective to simply give a warning based on detected words that various conduct with it will not be tolerated and that a moderator will be automatically informed of the post, and to remind users to make sure that their usage of the word falls within the guidelines.

This would eliminate Scunthorpe false positives, and have about the same effect of stopping the intended behavior.


Reddit has an “Anti-Evil Operations” team that has in the past acted arbitrarily against individual users, e.g. https://www.reddit.com/r/WatchRedditDie/comments/fkk6en/redd...

It does seem possible they may have banned an individual user for using that word.


I have a personal friend whose account was permanently banned (from reddit, not a particular sub) for using the word "retard".


Ironically, your comment itself also lacks any detail of countering or clarifying information, and is itself just a content-free slam on the person you replied to.

Is your point that the mods don't have all these blacklists (including of words), or that the mods do have the blacklists but they're allowed and expected to? Is the mod system working well, in your opinion?


> Is your point that the mods don't have all these blacklists (including of words),

Of course moderators have blacklists. For words / phrases / domains, they are site sanctioned through the (now) built in function called "Automoderator".

User blacklists are done through the built in ban function.

In fact, one would call these blacklists... moderation. Something a moderator would be expected to do.

> the mods do have the blacklists but they're allowed and expected to?

They are built into the site. It would logically follow that it is both allowed and expected.

Stepping back, I'm not sure how you can expect a forum, any forum, to survive without moderation.

> Is the mod system working well, in your opinion?

This is really impossible to answer.

From the bird's eye view, users and impressions are growing while reddit doesn't have to pay for moderation. A stunning success.

From a lurker perspective, they never interact with moderators and generally get content that has been reviewed and determined to be within the rules, though this may vary by subreddit.

From an active user perspective, the system may work well, or not well, depending on which subreddit(s) you frequent and how you use the platform. There are many subreddits, and moderators on some may certainly make your life unpleasant. So... don't be active on those subreddits.

However, the number of active users, according to the 90-9-1 principle, is quite small, and the number of those that ever meaningfully interact with a moderator, or even a bot moderator, is probably an order of magnitude smaller than that.

So yes, IMO, the mod system seems to work well overall.


If you try moderating one you'll see quickly why these tools are necessary.


Funny enough paying for professional moderation will actually help Reddit. The quality of almost every major subreddit is trash due to mods on a power trip (or paid by a third party to push an agenda).


What makes you think reddit wouldn't just take the third party's money to push the agenda themselves, like many media companies do?


They would, but at least in that case they get paid for it.


The last thing Reddit wants is to take responsibility for unpopular but necessary moderation.



