A clip from Stray got me banned from Twitter (fanbyte.com)
443 points by theshrike79 on July 23, 2022 | 541 comments



A couple of months ago it was discovered that posting this little guy https://i.imgur.com/Bzo3Vae.jpg on Twitter would instantly earn you a 12 hour suspension for doxxing. Fuzzy image matching works in mysterious ways.
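For anyone curious what "fuzzy image matching" can look like under the hood, here's a minimal average-hash (aHash) sketch in Python. It's purely illustrative: aHash is just one common near-duplicate technique, there's no indication it's what Twitter actually runs, and the file names below are placeholders.

    # A toy perceptual-hash comparison; illustrative only, not Twitter's system.
    from PIL import Image

    def average_hash(path: str, size: int = 8) -> int:
        """Shrink to size x size greyscale, then set one bit per pixel above the mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        # Number of differing bits between the two 64-bit hashes.
        return bin(a ^ b).count("1")

    # Two visually unrelated images can still land within a small Hamming
    # distance of each other, which is how an innocuous picture can collide
    # with something on a blocklist. (Paths below are placeholders.)
    # print(hamming(average_hash("innocuous.jpg"), average_hash("blocked.jpg")))

The smaller the hash and the more aggressive the downscaling, the more likely two unrelated images end up "matching".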

https://twitter.com/Raddagher/status/1533326201602707458


My Twitter account has been suspended multiple times, the last time for an obvious joke. As with the article author here, Twitter reviewed my appeal and denied it. https://lapcatsoftware.com/articles/twitter4.html

In the end I caved, giving Twitter my phone number and deleting the tweet, because I need my Twitter account for professional purposes. But now guess what, there was a data breach, and hackers are selling millions of Twitter user phone numbers and email addresses. https://restoreprivacy.com/twitter-vulnerability-exposes-5-m...


This is why you do not use a work/professional account for things other than work.


How do you judge what's work and what's not work? For many people, such as the self-employed, their online presence is part of their brand, and Twitter is professional networking.

This applies to both Craig and myself.


I would ask myself whether this post is likely to affect my online presence, and if the answer to that question is anything but a sound no, then I wouldn't post it.

That said, my online presence isn't part of my professional life. So my opinion probably isn't worth much.


To be honest, your reply came off as very condescending.

I don't know how anyone could predict that Twitter's crazy algorithm would flag that tweet, any more than they could predict that Twitter's crazy algorithm would flag a video of a cat game. It's all too easy to sit in judgment in hindsight, given the fact that Twitter's crazy algorithm did flag the tweet. But nobody really knows how it works with foresight.


> To be honest, your reply came off as very condescending.

Not my intention. You are right that no one would have known it would be flagged.

My point was merely that if the tweet isn't 100% for your work / online profile then I would use a different account to post it.


The thing is, I actually created a separate business Twitter account when I started my business. But... nobody followed my business account! I even tried to get people to follow it, but they didn't. My personal account was already well established, people were already following that, and they seemed to have no interest in the business account. So I basically had to abandon the business account. It still exists but is little used and totally obscure. There would be no point in posting from it anymore.


It is very easy unless shitposting is your dayjob.


Your joking speculation that there may be a YouTube video on how to cut off your own leg was in fact correct. There is one from "the leg man" posted 4 years ago.


That is why you have separate business phone numbers and addresses that you're OK with being public and that aren't actually used for anything serious security-wise, so you don't care if they're 'breached', because they're already publicly listed.


I'm a self-employed solo software developer who doesn't even use a phone for business.

Your suggestion is that I buy a separate phone and permanently pay for phone service for that separate phone, just so that I can give Twitter an unimportant phone number in case they suspend my account for an insane reason and refuse my appeal, and then get hacked?


Or anything else for that matter, such as government forms, bank accounts, etc. The phone number doesn't have to be attached to a real device; it could be Google Voice or whatever else, and you could expense it like a proper business expense. In general it's a good idea to keep your business and personal accounts as separate as possible. It's similar to the reason why you have separate business bank accounts, emails, domains, addresses, credit cards, etc.

Phone numbers are pretty much the super-cookie anti-fraud ID of today. I don't consider them private anymore, but if you're disciplined about keeping profiles separated, it can definitely help scope the damage.


[flagged]


You're looking at HN as a conversation where someone is giving "you" unsolicited advice that you are taking as criticism, instead of as an open forum where that comment is also generally useful to all the lurkers out there.


> that comment is also generally useful to all the lurkers out there.

Is it though?


Yes it is.

Ten years ago my employer asked me to make a GitHub account to contribute to the employer's open-source software on GitHub. I could've used my personal account, and plenty of other employees did, but I figured it might be bad to give my employer control over it, so I didn't.

Today the employer has tightened control over the GH accounts: they must now SSO to the company portal, which requires doing it from a company-enrolled machine running an attested OS and browser. It's perfectly reasonable for the company to have done this, but I would've been devastated if this had happened to my personal account.

Separating work accounts from personal accounts is always a good idea. Clearly at least one person in this thread did not know it, so there's no harm in telling people about it.


> Clearly at least one person in this thread did not know it, so there's no harm in telling people about it.

Heh, no. I am the person who started this thread, and the only person in this thread other than those defending unsolicited advice.

I'm well aware of the possibility of separating personal and work accounts, as is almost every intelligent person. Whether they actually choose to do so or not depends on a lot of factors that are specific to their situation. "Good advice" in the abstract is not the same as the best choice for an individual at the time.

It's hard to say whether I was even involved in the Twitter data breach, because 5 million is only a small subset of Twitter users, and apparently they're refusing to notify the victims. Not sure how Twitter gets away with non-disclosure.


>> Clearly at least one person in this thread did not know it, so there's no harm in telling people about it.

> Heh, no. I am the person who started this thread, and the only person in this thread other than those defending unsolicited advice.

Heh, yes. Ironic given the original topic was in part criticizing human mod decisions.

And I never ever thought after decades on public Internet forums that unsolicited advice was somehow disallowed or even generally frowned upon. This is sort of the social contract you sign up for when you post your stories in a comments section.


> And I never ever thought after decades on public Internet forums that unsolicited advice was somehow disallowed or even generally frowned upon.

Well, I'm glad that I've finally got you thinking about it now. It's important to recognize that many people do resent unsolicited advice, especially when it's obvious, condescending, and implies criticism of the person's choices, which is the case here.

I talked about a tweet of mine that was mistakenly flagged by Twitter, and how I had to give Twitter my phone # to unsuspend my account. One replier said "This is why you do not use a work/professional account for things other than work" and another replier said "That is why you have separate business phone numbers and addresses". How can these direct replies to me not be interpreted as personal criticism of me and my choices? The implication is that I made a mistake and wasn't following "best practice".

I actually have no regrets, either about writing the tweet or about not having a separate business phone number, and I continue to have no intention to set up a separate business phone #, address, etc., because that would be a hassle for me with little or no benefit.

But again, this is not some secret, obscure thing that people need to be told. Of course people running a business are aware of the possibility of separating the business from the personal.

Not to mention, it's kind of hilarious to suggest Google Voice as a solution to one's privacy problems, as if adding Google into the mix isn't a privacy problem in itself. Google is the world's largest personal data collection agency.


> Well, I'm glad that I've finally got you thinking about it now.

Wow, on being condescending, takes one to know one I guess.


What are you going on about? He gave perfectly reasonable advice that many consider a best practice. Hacker News is not the place to act personally attacked and offended over such simple shit.


The commenter "novok" admitted in another reply, which was apparently flagged, that he knows me and has been following me on Twitter for years, and proceeded to give me even more unsolicited life advice. It was all very much directed at me personally.


[flagged]


> I'm just giving general advice about how it's good to keep things business and personal things separate in a public forum where others reading can also learn from the advice.

Literally nobody on Hacker News needs this advice, and in general, unsolicited advice, especially from strangers, is unwelcome.

If you have something like "here's a hidden Terminal defaults write command I discovered that is largely unknown and will solve your problem", that might be a different matter. But your advice is so obvious that it doesn't need to be said at all. In an ideal world where everyone had unlimited time, unlimited money, and no other constraints, everyone would always follow the "good advice", but in a non-ideal world there are many tradeoffs to consider, and if you're talking to a stranger, you don't know their personal tradeoffs. Of course I've thought about these things, as any intelligent person has, and made my choices based on my circumstances.

There's a level of conceit in thinking that other intelligent adults need advice like this. You seem to believe you're being helpful, but effectively it's insulting the intelligence of others.


>Literally nobody on Hacker News needs this advice, and in general, unsolicited advice, especially from strangers, is unwelcome.

>There's a level of conceit in thinking that other intelligent adults need advice like this.

The only conceit being displayed here is from the person who thinks they can speak for other people on what they do or don't need to know about separating accounts, especially after they themselves suffered from not separating them.


> especially after they themselves suffered from not separating them

I didn't say I suffered. The odds are that I was not involved in the data breach, given that it was only a small percentage of Twitter users.


You seem slightly confused. This is a public forum not a phone call. If you need some level of control over how people reply, talk to a carefully curated group of friends who will comply with your whims.

This thread started off interesting and turned embarrassing.


jmp.chat will give you a number for $3/month. Voip.ms is cheaper, but I don't know how well it works with SMS short codes.


I don't know about Twitter but many of these phone-number-for-verification schemes outright reject VOIP numbers.


@a2_4am got banned from Twitter for linking to a video on the Internet Archive. The video is of an Apple II computer booting up an old program. https://mobile.twitter.com/textfiles/status/1536378776094908... He ended up fighting for days, and the appeal was never granted, even after someone on that thread said they escalated internally. He ended up moving to Mastodon, @a2_4am@mastodon.social


This is a good response. If twitter shows they don't care about you enough to make careful moderation decisions, then leave them with their shitty toys and dark patterns. Saves me a visit to nitter.net.


Facebook has been using ML to detect “revenge porn” images since at least 2018.

Likely they don’t know any more than you or anyone else does why their ML classifies your image that way. In fact, one of my pet peeves about ML is that most of the time no one can meaningfully explain why it made any particular decision (they can explain inputs and weights and the algorithms used and matrix math, etc., but can’t say “the model was 93% sure that part of the image was a penis”).

Anyhow, if an actual human did look at your first appeal, you did not do yourself any favors calling them “dense”. People don’t like it when you call them stupid[1].

[1] https://www.theonion.com/people-dont-like-it-when-you-call-t...


You're not wrong about the "dense" thing, but it sure highlights the need to keep examining and reexamining the relationship between services like Twitter and e.g. censorship, governance, and the First Amendment. Nobody's breaking the law here, nobody's being unreasonable, I think, but this isn't a great balance of power.


This is far from the main discussion, but I used to be a "stinker" like the author describes, and I have a touch of unsolicited advice. Life is much happier if you save your obstinate streak for very limited, important occasions. Otherwise, you live your life ensnared in the negativity of trivial violations of your personal expectations about values and behavior.


Alternatively, we need people--nay... heroes--such as this, or bullies like Twitter get to run roughshod over everyone.


Obstinacy does not guarantee success. My advice is a variation of "pick your battles".

It's ok to have battles, just don't make them a big enough part of your life that it becomes your personality, and don't engage without proper forethought.


Companies play by the rule "better sorry than safe" but force their users to play by "better safe than sorry".

As others have mentioned, the main issue is not the autobanning (it's surely a false positive; those things inevitably happen on mass-scale products) but the appeal review that fails to catch it.


But they aren't sorry either. This guy appealed, and if a human at Twitter ever saw it, it was probably some overworked contractor who gets a strike for every appeal he lets through.


If this gets enough traction they will be on it in a few days (or at least they will revert the ban). It's sad.


Yep-- if the appeal hit a human and got a quick "oops, sorry!" that would be another matter entirely.

I wonder if they're using their obviously invalid "appeals" results to train the AI? ... so as time goes on it just gets dumber and dumber.


> You cannot possibly be this dense.

Who is the dense one here? You just insulted a person who has to look at a million of these, and had absolutely nothing to do with the misclassification, and then you’re surprised when they rejected your appeal. Remember there are actual people reading your words (at least for now)


Yeah, heaven forbid you follow your company's policies as outlined for the appeal process; better to power-trip because someone who was banned from the platform, the harshest punishment your company has, is fully-expectedly vocal about being mistreated.


Idk, this feels like a bit of a double standard. Either being mistreated justifies bad behavior, or it doesn't. If it justifies insulting a customer service rep who didn't personally ban you in the first place, then that rep upholding your ban due to being unnecessarily aggressive is justified too.


Twitter moderators are acting in a professional capacity, representing Twitter. In my mind, there really is a (justifiable) double standard.


This is a great illustration of why appeals processes are often not taken very seriously. Generally they're used to re-litigate the issue by people who do not even consider the possibility that they may have violated policies. The insults are just the icing on the cake.

Basically, the real people involved know that appeals processes are going to be subject to vast quantities of what amounts to spam.


“You” is Twitter


Exactly. I can’t believe so many commentators here (and maybe the employee who reviewed the request, maybe not) missed this simple point and turned the heat on the author instead. A shame that English doesn’t distinguish between the singular and plural forms of “you”, I guess?


It seems AI is quickly becoming capable of automating the worst human systems rather than offering amazing solutions to humanity. In this case, it seems AI has successfully mimicked the horrible bureaucratic systems, operated entirely by humans, that leave you feeling equally helpless (think the film “Brazil”), only these fancy technical systems are cheaper, maybe? I suppose it should be no surprise that when we tried to imitate humans we got end results as bad as when humans were running things.


This is standard. Reddit's started doing it too recently. Totally random stuff, instabanned. Reddit threatened me with a suspension for threatening violence over a polite and entirely non-violent post.

Meanwhile, I report actual revenge porn, and they reply that it doesn't violate their content guidelines (and I mean completely unambiguous cases, with a poster gloating about how it shows up when googling the victim's name and that her classmates and future employers would see it).

I went through months a while back trying to get Twitter to remove an impersonator claiming to be me (and using my photographs, etc.) whose MO was to start conversations with friends/family/professional contacts and begin cussing them out. Twitter didn't want to do anything about it.


There are a lot of perverse incentives; people aren't punished for being trigger-happy.

I remember that in a subreddit there was a discussion about Facebook "top posters". I replied that it wasn't exactly news, and wrote about the "top posters" of the letters-to-the-editor section of a local newspaper from my childhood. One was a famous TV meteorologist who would always write rebuttals to young-earth creationists, from a Christian perspective. One guy was the leader of a tiny racist party who would write about how Yugoslav refugees would take over our country. Others still were just old cranks who had an opinion on everything.

The guy who owned that subreddit banned me over "doxing". I tried arguing, hey, they were public figures in their day, writing under full names (and location!), and they're long dead now anyway, but he either didn't want to admit he was wrong or was just terrified of losing the subreddit to the admins.


> Here’s the thing: I’m a stinker. Even at my jolliest, if someone figuratively shoulder-checks me, I’m probably going to make so many mountains out of this molehill that I will adjust the topography of the United States significantly.

I’m sorry, but this is your real problem, not Twitter banning you. This is called “pride” and is historically recognized as one of the largest “problem creators” as far as emotions go.

The author writes like this is some kind of noble trait, but is in reality very probably the source of many of the author’s negative experiences.

There is a clear path to getting unbanned from Twitter, but the author refuses to take it. Shame.


Ideally people would just quietly stop using Twitter and services like it en masse, but that's not going to happen, so it's reasonable for people to complain about mistreatment. And that's what this is: the author got banned for their post violating a 'revenge porn' rule when that's not even close to what they posted.

Twitter's content moderation didn't work here. It banned someone incorrectly. That person used the appeal process and was denied, even though the content in question unequivocally does not violate the cited rule.

Twitter has no obligation to host anyone's content, and they are within their rights to deny service to someone whenever and for just about whatever reason they want. But this is how you're supposed to handle it! If you've been mistreated by Twitter, you say something, people see it, and they take it into consideration when thinking whether they'd like to start/continue using Twitter.


This guy is upset that the free service he uses employs imperfect AI and it affected him personally. I agree we shouldn't be using AI for everything and we might even include humans after an AI makes a call just to verify the call. However, that has a cost and this guy isn't paying. I'd guess this guy wouldn't pay, because having to deal with occasional AI errors is part of the cost and he clearly doesn't want to pay.

So he appealed. Sure. Here's his appeal:

> This is a video from the PlayStation5 game Stray. You cannot possible be this dense.

Verbally abusing reps like this is not acceptable and certainly does nothing to make me sympathetic to this guy. Who would want to help this guy? Even when I was a minimum-wage employee at a fast food chain, the policy was that people who verbally abused employees didn't get our food.

So, the appeal was denied.

Now he's going on the warpath because he's 'right'. He's not right. He was right that the ban was nonsense and should not have happened. He stopped being right with his verbal abuse.

His idea to win them over? By fucking himself over. He has 57K followers. That's not nothing to him but in the grand scheme it's essentially nothing to Twitter. He's a supplicant but he thinks he's king. He has no leverage here and rather than being nice he's saying he's going to be more of a jerk. I won't even wish him good luck.


I always find it funny when people get up in arms about "verbal abuse" and somehow don't care about the abuse inherent in the situation. Somehow the perpetuation of the abuse is justified by someone having an imperfect response, dear god. But I suppose you view him as an uppity supplicant who offended the king. Therefore he's a jerk and deserves it, according to you? This is not a view of things that actually solves the problem long term. The "jerk" is being far more productive than you; he is far above you.

When this happens to you, you'll write a 'polite' reply and get ignored all the same. You probably won't even notice, because you're too busy calling people jerks for getting banned by AI.


Two wrongs don't make a right.

Twitter banning him for "Revenge Porn" over a video of Stray is really not a great look, and the ban should've been immediately reversed on appeal. Him being abusive to the moderation team (who more likely than not had nothing to do with the team that built the AI moderation system) is completely uncalled for, though, and deserves punishment.

Funnily enough, what I think is the right outcome here would come across as incredibly petty: Appeals process is successful, and the ban over literal cat video is reversed; however, there is now a new ban due to abuse of the moderation team, which will have its own appeals process.


Agree on your proposed outcome.

If you have done any sort of social media moderation, you will very quickly develop an immediate reaction to even a trace of abuse or hostility from the users. There are so, so many assholes on the Internet, and you can't fix them. The more effort you put in to one incident, the more of your time these assholes will take up.

The CS rep handling this particular case has dozens of others to handle that same hour. They don't care about you, the user, because you are not a customer, you are the product, and you are easily replaceable. The CS rep's boss also doesn't care about you.


> They don't care about you, the user, because you are not a customer, you are the product, and you are easily replaceable. The CS rep's boss also doesn't care about you.

And this right here is the root of the problem. The problem isn't that customers are "abusive", it's that CEOs and stock holders can offload the fruits of their own abuse onto people who are powerless to change anything.

It's not exactly a surprise that that breeds resentment from both sides.


> Him being abusive to the moderation team

His comment was angry, but not "abusive".

> (who likelier than not have nothing to do with the team that built the AI moderation system)

No: they do not get some free pass here from the moral implications of this system they have chosen to be a part of. I have at least some sympathy for the idea that some of these people might really, really have needed a job or have gotten caught up in a system they didn't realize was evil until it was too late... but now they work for the system, and the power differential is clearly skewed in their direction, with them being the judicial arm of this ridiculously broken enforcement mechanism created by Twitter: they have power over this user that this user will never have over them, and so the onus of being reasonable is entirely on them.

> is completely uncalled for, though

No: it was quite directly "called for". Twitter did something ridiculous here, and they frankly at this point are doing it knowingly, and the people who work for Twitter are actively deciding to be part of Twitter's process.

If you go around as a bully kicking people in the shin because "no one said I can't" even though you know you are being a complete jerk about it, and then one of the people you kick has the gall to get angry, that anger should be expected and I'd even go so far to say it is justified.

> and deserves punishment

Even if this were true--and it is not--we should then be faced with the reality that that is a separate issue; and so, if Twitter wanted to be honest here, they should accept the appeal and then separately reinstate the user's account. However, remember: if we believe that being angry at someone who wronged you is somehow wrong (which again: it is not) and we believe (due to your insistence) that this potential customer service agent (we don't actually have proof there was a person who looked at the appeal) was horribly wronged by the user (even though they weren't), then the action of punishing the user by lying about the result of their appeal for sending a slightly-frustrated response is yet another situation where we should claim that the person wronged should turn the other cheek, no? So your own logic actually also comes to the conclusion that this customer support person should be punished... I might personally suggest--for the sake of irony--being immediately fired and then judging their amount of severance pay based on how friendly they are to the HR representative (who, obviously, despite being the arm of this decision will be cleared of culpability due to having merely chosen to be part of a labyrinthine bureaucracy instead of directly taking the action) after they are given the news.

Look: I used to actively do customer service. And did I get some people who were angry? Yup. Did it sometimes piss me off? Of course! Hell: I will even admit to occasionally getting into arguments with users that scaled on how annoying they were being.

But, at the end of the day, I didn't let that decide whether I was going to give someone the refund they asked for or to fix their account in whatever way was needed. It might have limited the "extra" friendly help or random benefits I sometimes threw people I would directly interact with, but if they were legitimately wronged they deserved legitimate compensation and their anger at me had to be contextualized by its cause: that my software had wronged them. If anything, it felt a bit good to just hit the refund button and then ignore them forever.


You seem really hung up on only one side being right or wrong. He was in the right. He then did something wrong. That doesn't justify what Twitter did, but it doesn't win me over to his side either.

Yes, I generally treat people with politeness and consider it a mistake when I let my emotions get the better of me. Perhaps not surprisingly, I've usually only had good experiences with most businesses when there have been mistakes, even mistakes I felt were egregious and should not have happened. That's not always the case, but in those cases that's the last business they usually get from me.

When I say this guy is a supplicant, it's because he has no leverage to get what he wants. It's obvious Twitter doesn't care about him. The other stories in these threads make it obvious they don't care about most of their users or at least about fairness. That's what he's subjected himself to and this result shouldn't really be a surprise.

I use HN and it's free to me. I could get banned. I'd have some questions about my ban, but I also know dang to be pretty damn fair. I respect him and I think he respects others. There's no scenario where I'd talk to dang like this guy talked to Twitter. If dang acted like Twitter, I would either not be here, or I wouldn't be at all surprised when I was treated like I didn't matter.


> You cannot possible be this dense.

This is your idea of verbal abuse? Obviously this is subjective, but I would laugh at you if you said this to me - like literally laugh in your face.

No, at some point you have to say the problem is being overly sensitive towards anyone complaining about your fuckup. The guy is right, and they fucked up - he shouldn't have to bend over backwards because they're fucking babies. Maybe they should try to have some accountability and integrity and try to fix the problem rather than get all pissy about someone who's being civil (the problem is nobody can agree on what's civil, and I understand this is not an easy problem). From my perspective, you are the problem here. And that's the thing, everybody sees it differently.


Right... do you yell at waiters and phone support people too? You're doing that to someone who has absolutely nothing to do with the problem you are having, is probably often harassed by people like you, and who is at literally the lowest rung on the ladder. You're making people's day worse for absolutely no reason except that you can't make yourself feel better without being obnoxious to someone else.


This has nothing to do with making people feel better. It's about holding twitter accountable to their own policies. What do you suggest I do if not use their own support system? The next step is to blast social media.

"You cannot possibly be this dense" is referring to whoever is making the decision. If the support team has the ability to overturn this decision, and they maintain the decision, then they are indeed the problem and deserving of far more abuse than that. If they are unable to change the decision then why do they care if I insult their company policy, especially since they hopefully have the autonomy to recognize the stupidity of it? They should be able to laugh about how dumb it is too.

You are asking me to be civil with twitter when they are not being civil themselves. I have no respect for hypocrites.


Support systems have absolutely nothing to do with holding anyone accountable ever, and I don't know why you think they would.

> "You cannot possibly be this dense" is referring to whoever is making the decision.

Is this random guy making 8.50 supposed to read your mind? The person reading your brilliant commentary is not going to send it upwards for you. Literally the only person reading it is the guy you sent it to.

You're acting like the random support guy is an idiot, and when you do that he's going to also assume you're an idiot, because you're acting like one when you send a message to one guy and say "you're dense" while actually referring to some third party.


> You're acting like the random support guy is an idiot

It's exactly the opposite. I'm treating him with enough respect to think he can think for himself. You're assuming he's an idiot and needs to be coddled. You're actually being condescending even if you mean well.

> Is this random guy making 8.50

Why does his pay matter? It feels like an assumption that he's dumber because he's not well paid.

> supposed to read your mind

No, he's supposed to read my words. If I believe a peer could understand me easily, why do I need to dumb down what I say for him?

I've worked retail before. There are people that are just assholes and there are people with legitimate complaints. I laugh at the assholes and they make my day better, not worse. It's the people with legitimate complaints that I try to actually help.


>Support systems have absolutely nothing to do with holding anyone accountable ever

Incorrect. Where I have worked the logs of support system tickets are systematically used to hold people accountable. 3x spike of networking support tickets? Someone(s) at some level are going to have to explain what's going on and be accountable for fixing the situation. Same for non-IT customer support systems.

Support systems are very routinely used in this way, it is part of the basic purpose.


It’s rude towards Twitter. This is not a conversation, it’s a box in a form. One of many copies of the same form they’re churning through. I don’t understand why a CS rep / moderator reviewing appeals would take it as a personal slight, because that would require personally identifying with Twitter and its algorithms.

Do you think people should be banned for being snippy in a customer satisfaction survey too? Product reviews? A human reads it. A human that should know well enough that they’re not the subject of any negativity therein.

The idea that this is a CS rep’s petty revenge is as much of an unsubstantiated fantasy as “they’ll tell the cook to spit in your burger”. It’s far more likely that either the human reviewer(s) is doing a poor job, or there’s something else broken about the appeal process, considering all the other comments referencing nonsensical failed appeals with no evidence of “rudeness”. Like false positives in appeals going uncounted because the appeal decision is itself taken as the source of truth, and reps being incentivized to churn through appeals quick to save on labor costs.


The support person is almost certainly incentivized to get through as many appeals as possible. He's also probably paid very little. He has basically no incentive to help this guy and this guy gave him a disincentive. There's also likely correlation between people who speak this way to others and people whose bans were legitimate. Next.

Spitting in a burger is active malice that can get you in legal trouble and I've still seen it happen. (For the record, I didn't let the burger go out and the guy got fired.) There was also a case where a local restaurant cook actively poisoned people he didn't like.

Choosing not to help someone when you could is passive and it happens all the time. The worst consequence is you lose your job. One of the few freedoms of low paying jobs is that replacing those jobs is very easy. If you don't believe these things happen, start talking to retail and support employees. Or you can find plenty of forums where people talk about this type of thing.

Do I think employees need to recognize that they are representing their company and no one is saying these things to them personally? Yes. Do I think people need to stop talking to "companies" this way? Also yes, even if only because they're more likely to get what they want that way.


Yes, it was rude. And if Twitter wants to ban someone for being rude, that's their prerogative and I support it.

I also support the rude person relating how they were banned for violating a rule they don't think they could possibly have violated, and I can look at it and say "Wow this person was rude but that was yet another instance of Twitter being stupid. I probably wouldn't like this person. I'm also not interested in using Twitter."

Also: while we'll never know for sure, do you really think the appeal would have been successful if he instead wrote "Hello Twitter, could you take another look at this? I posted a 5 second clip from a new video game about a stray cat, and was banned for violating your revenge porn rule. Can you help me understand how the rule applies to this clip? Sincerely, User"? Because I sure don't.


I think the chances were slim. I think he reduced those chances significantly by being rude. Assuming someone who can help sees his blog post, I think he's also reduced his chances that they'll want to spend their time helping him.


I wouldn't characterize "You can't possibly be this dense" as verbal abuse. Twitter loses credibility every time they enforce their own policies inconsistently. OP's incredulity that an actual human could look at that video and think it was revenge porn seems well-placed. The points about whether OP has paid Twitter money seem irrelevant; this is about Twitter enforcing their own policies correctly, not about some kind of standard of paid customer service.


I don't have a Twitter account and I never will at this rate, since all I ever hear about Twitter is what a shithole it is. Twitter isn't showing me ads and I'm not posting things that could draw more users to Twitter to see more ads. That seems like a real cost to me


There's some selection bias there, though. "I had a nice conversation with some distant friends and discovered an interesting person" isn't newsworthy.


I think what you're missing here is that the author is not looking for your sympathy, nor Twitter's sympathy. There's a larger point the author wishes to make about Twitter:

> Somehow they can do this, but racial slurs are generally pretty okay, white supremacy is largely ignored, and every Twitter troll I’ve reported for harassment gets a three-week investigation and then a “We find they did not violate our rules” email.

Note also that the author's goal is not getting unsuspended by Twitter, but rather this:

> I’ll not tweet for months if it gets someone to write “We as a company think the cat is revenge porn” in an email.


I think you're being overly generous here.

He doesn't offer examples of the harassment he reported. Even if he had, he's comparing Twitter AI-flagged revenge porn posts against user-reported harassment posts. The former is pretty clear cut and Twitter flagged it themselves. The latter may be a judgment call and was probably reported by the alleged target. His rudeness undercuts any such argument anyway, since we can't be sure the rep who reviewed his case didn't just see this and not waste his time on it.

His main point seems to be "Twitter sucks." I agree. It sucks. So I don't use it.


> Even if he had, he's comparing Twitter AI-flagged revenge porn posts against user-reported harassment posts. The former is pretty clear cut and Twitter flagged it themselves. The latter may be a judgment call and was probably reported by the alleged target.

I would consider humans manually reporting tweets to be much more reliable than AI flagging, especially given the bizarre false positive results we've seen from Twitter AI flagging.

I've personally reported tweets, with the same kind of non-results. Mostly bots though rather than harassment.

> His main point seems to be "Twitter sucks." I agree. It sucks. So I don't use it.

Well, the author isn't using it right now either, intentionally.


> I would consider humans manually reporting tweets to be much more reliable than AI flagging, especially given the bizarre false positive results we've seen from Twitter AI flagging.

Sure. However, in this case Twitter is the one with the AI and the one deciding whether to trust a source. The case for revenge porn is virtually binary and even where it isn't, they're going to lean on the side of not allowing something. The case for harassment is reported by unknown humans who are probably personally involved, and whether something is or is not harassment is not always black and white. From Twitter's point of view, it probably makes more sense to trust their own AI.

Here, the author hasn't provided specific examples of alleged harassment, so readers can't even make a judgment call.


> This guy is upset that the free service he uses

Twitter is not free. It's just that the payment is not money, but personal data, generating content that Twitter can use, and being a viewer of ads.


>He stopped being right with his verbal abuse.

No. He did not stop being right about a mistake, and its reasonable remedy, by making an unrelated mistake himself. You might argue that the reasonable remedy to his abuse is also to receive a ban, but that is a different argument. It should also be logged as a separate disciplinary action, not just happen because a petty Twitter employee got their nose tweaked by a frustrated user who had been mistreated with false accusations.


I don't think we disagree here. He's right about the ban, but he's made the situation more complex and he's not right about the whole situation.


I find it really funny so many people are up in arms against this perceived “verbal abuse”. The “you” he filled out in the appeal form clearly refers to Twitter as a platform. Maybe it would be clearer if English actually distinguished between the plural form and the singular form of “you”, like in various other languages? Perceiving this as a personal insult is exactly a “pride” issue of the Twitter employee and some of the commentators here, not the author.


We have to acknowledge that banning is weaponized by platforms now too.

Sure there are assholes out there, but I think that banning people for a first offense is also an asshole move. People are banned more frequently by moderators and bots/scripts now, and even manually, by people who have their own ingrained biases.

The modern process of banning people is often arbitrary, and usually after years of work within an ecosystem or platform that is often also tied into many other things like authentication in other sites and apps. A ban can devastate a person's reputation or business, it should not just be considered as "It's their platform, they can do whatever they want" when the platform involves a very public social aspect of operations.

We should consider banning people differently than we used to. Proper support for platform users should also be mandatory in cases like this... It's not fair to just tell a banned user they can fill out forms and wait for a response while their reputation, time invested, and business goals suffer in silence.

Take, for example, a Wal Mart that allows food trucks to set up in a massive (Wal Mart-owned) warehouse to sell their goods... A vendor parks inside and sets up their truck... The vendor can have one single argument with a warehouse manager or a customer, or even make a mistake in advertising, and if they are banned from the Wal Mart warehouse (by today's standards... figuratively of course), the vendor's truck and all of their possessions left within the warehouse get confiscated by Wal Mart, and the vendor must go and rebuild their business somewhere else. This also allows competitors and nefarious managers to capitalize on the vendor's confiscated possessions within the Wal Mart warehouse from that point forward.

The only way to ensure that people's money and time investments within a workspace are stable is to have rule transparency, proper and easy user support, and maybe even a "3 strike" rule. None of those things are happening on platforms right now, and it's creating very disgruntled users.

That being said though, the "dense" comment is not the best way to go about getting a positive resolution to a technical support problem.


>This is called “pride” and is historically recognized as one of the largest “problem creators” as far as emotions go

yep, life sure is easy if you don't have a spine and accommodate every little act of bullying thrown at you by modern society. It's not the guy who is wrong, it's the world that is wrong, and it's good that he understands it.

The author refuses to take the 'clear path' because that isn't an actual solution to the problem of faceless organisations taking away people's ability to speak, and he probably recognizes that, as someone who has an audience, it's responsible to say that out loud.


There's a great middle ground between "being spineless" and "getting into constant altercations over nothing".

For me it's sometimes a pretty hard space to inhabit. It's easier to fly off the handle or seize up and say nothing. But it's much more rewarding, and for me at least has gotten easier as I've gotten older.


Wow, the author got falsely accused of posting revenge porn, a criminal act by the way, and your answer is take it? Why are you so intent on attacking the author’s character for simply defending himself?


Twitter's auto moderation screwing up on your cat game video is not being 'falsely accused of posting revenge porn, a criminal act'.


Excuse me? That is exactly what Twitter accused the author of doing. The policy which Twitter referred to according to the article has the words “Non-consensual nudity policy” written in huge letters [1]. Do those words mean something other than revenge porn?

[1]: https://help.twitter.com/en/rules-and-policies/intimate-medi...


They sent him an automated private message misidentifying one of his videos. This is obviously lame and annoying but no, he was not accused of a crime he did not commit or have to escape from a military stockade into the Los Angeles underground.


The flagged post was reviewed by two humans. I think you're overemphasizing the 'automated' aspect here, which is just an implementation detail AFAICT. Posting revenge porn is a crime, though I agree Twitter doesn't seem to be calling the cops. If they did, I don't think the evidence would be compelling enough to even bring charges, much less a conviction. But the account stays suspended regardless.


> The flagged post was reviewed by two humans.

How do you know? I wouldn't be surprised if the appeals are automated too, especially the initial ones.


Having ML handle appeals strikes me as implausible, but possible.


I guess the question is "is it Twitter's goal to have the appeals process work, or just exist?" If they have some dumb automation that just rejects most initial attempts, it could discourage their users to the point where they can employ a much smaller staff to handle the appeals. Especially if the sanction can be solved by the user just deleting the content.


I'll admit that my initial comment was somewhat biased coming from another thread in this discussion discussing verbal abuse of the moderation team. In that context, my assumption was that they were human, naturally. Your point is fair, though, especially considering Twitter's scale.


They didn't file a police report, no, but this is absolutely the equivalent of verbally accusing someone of posting revenge porn.

If someone walked up to you and said "Your most-recent tweet is revenge porn and it's disgusting" and then reiterated that after double checking both the innocuous tweet and that they had the correct person, you would probably be claiming they incorrectly accused you of tweeting revenge porn (i.e. a crime). Because that's precisely what they just did. That Twitter is letting a machine they built do that for them does not remove their responsibility around the act, nor make it somehow not an accusation.

If anything it seems safe to claim that it's more than an accusation. They're acting on it, doling out (minor) punishment along with the accusation.


When the security person at Best Buy stops you to check if your receipt matches what's in your cart when you head out, they're not accusing you of theft.


That's not even remotely what happened here though.

When they stop you, say "you stole that", ignore that your receipt says you paid, kick you out of the store, and stop you from coming back, they are pretty clearly accusing you of theft.

That's a straightforward translation of what Twitter did.


Whatever 'translation' you choose, none of it comes anywhere close to an accusation of a crime. It's pretty plain from reading the message and the policy. You have to lean pretty far into bombast to turn it into that. Which, if that's how you choose to read it, fair enough, but it has no basis in any of Twitter's actual communication with this user.


No?

If I say "you distribute revenge porn", I'm accusing you of a crime. There's no ambiguity there. Revenge porn is a criminal act nearly everywhere. In some states and some situations, it's even a felony, which is a significant step up in severity.

I haven't informed your local authorities that you do this. But that doesn't have any effect on whether or not I am accusing you.

>An accusation is informally stating that a person has committed an illegal or immoral act. - https://www.law.cornell.edu/wex/accusation

Twitter has unambiguously made the claim that they are tweeting revenge porn. Twitter has clearly stated that doing so is the reason their account is banned. They're accusing them of a crime.

It's not bombastic or stretching, it's the literal description of what Twitter (via their bot and their review process) is doing.


> If I say "you distribute revenge porn"

Twitter doesn't say that, it's, again, pretty clearly spelled out in the policy. The leap from 'our system thinks this looks like revenge porn' to 'you are accused of a crime' doesn't seem to be based on anything beyond umbrage and repetition. Neither of these things make it true, though.


If the system went from "our system thinks this looks like revenge porn" to "we are sending you a warning in case you are, verify that you are not doing it or change your actions immediately. you may be banned on review if it appears you did" I think you might be right.

But that's not happening. It goes from "our system thinks this looks like revenge porn" to "you are banned". And even "we double checked, you are banned". If Best Buy said "your actions seem similar to thieves" and kicked you out and banned you while quoting theft laws, without checking if you had stolen something, yes. I would say they were accusing you of theft, because they are acting as if you did. Actions can accuse as well.


The account was suspended and, again, the policy, which you should read, makes it pretty clear that you can be suspended for Twitter being wrong about your post. It takes a lot of implicit extrapolations ('revenge porn', a term they don't use; 'banned', a thing they don't actually do in those cases; etc.) to get to 'accused of a crime'. And those things aren't in the actual message.


Using a synonym isn't an "implicit extrapolation."

> 'revenge porn', a term they don't use

Again, "non-consensual nudity" means revenge porn.

https://help.twitter.com/en/rules-and-policies/intimate-medi...

> 'banned', a thing they don't actually do in those cases

They "suspended" the account and haven't reversed it. Basically the same thing.

Please, repeating a claim over and over doesn't make it any more true.


So if I send you an automated message saying that you have done something illegal, you’d be fine with that? That’s not accusing you of a crime? That’s absurd.


'crime', 'illegal', etc, are rhetorical escalations you brought into this. Twitter suspends accounts that violate its policy against "intimate photos or videos of someone that were produced or distributed without their consent". The policy includes content that appears to be that but might not be, content posted inadvertently or with the intent to report, etc. There's no mention of crime and of course, there's no easy way to know whether a particular post would actually be criminal and in what jurisdictions. Those are things you just declared, Twitter doesn't.


That’s a lot of words spinning a factual event into a “rhetorical escalation.” The fact is plain and simple, Twitter accused the author of posting revenge porn. Posting revenge porn is, despite your objections, considered a crime. Therefore, Twitter accused the author of a crime.


As a fly on the wall, I think you’re both right! Both of your points are good, and aren’t mutually exclusive at all.

TikTok sends automated messages all the time accusing people of ridiculous things (mostly during livestreams). I bet if you were subjected to that on a regular basis, you might feel a little differently.


Uhhh, the author did nothing wrong and broke no rules (and yet, as pointed out, rampant, violent hate speech lives on social media without many problems). Someone getting banned for posting a cat video, then getting slightly and innocently sassy about it, is the problem the author is writing about. They're not trying to get a date with you or something.


What is the clear path? According to the article, he properly followed all of Twitter's procedures by appealing TWICE.


> I’m sorry, but this is your real problem, not Twitter banning you.

I’m sorry, but no it’s not.

This is called treating the cause instead of the symptom. More people should do it.

The alternative is to acquiesce to the false moral superiority of those in control.


> This is called "pride"

Yes. It's having some pride and not groveling to be allowed back on their platform.


I don’t see the “clear path” here. As many others explained, deleting the video would both be wrong and potentially problematic.

I find it really funny so many people are up in arms against this perceived “verbal abuse” and giving the author a lot of flak for this one single word. The “you” he filled out in the appeal form clearly refers to Twitter as a platform. Maybe it would be clearer if English actually distinguished between the plural form and the singular form of “you”, like in various other languages? Perceiving this as a personal insult is exactly a “pride” issue of the Twitter employee and some of the commentators here, not the author, who IMHO acted perfectly reasonably.


Putting up with the demands may be the easy way out for the individual in cases like these but I am thankful that there are people like the OP who stand up against it so that we hopefully won't have to put up with worse shit in the future. Abuse only ever stops when there is pushback.


What is the clear path?


Admitting that the tweet is revenge porn by closing the review request and deleting the tweet.

I mean, in this case, I’d likely keep forcing the review too. This is clearly just a bad detection


Will this not cause more issues?

They will very likely use these appeals to help train the content AI.

You're rewarding it for a wrong decision that it will use for future decisions.


That's literally admitting to a crime. Even if the crime didn't actually happen, the tweet is gone, so now he's left with having admitted to a crime and no evidence to show that he didn't actually commit it.


It's not literally admitting to a crime. If I said that anyone who replies to this comment stole my wallet, and you reply, you're not admitting you stole my wallet.


Is it a stretch to assume Twitter will assume he admitted to the crime, though? I’m pretty sure that in a database somewhere the author will be flagged as “this account posted confirmed revenge porn in 2022”. What if their bots think another tweet of his is revenge porn? Will they ban him forever with no appeal? What if they decide to report it to the authorities? What if Twitter decides to show this info on profile pages to warn users about potentially offensive users? What if there’s a leak and the author becomes publicly flagged as someone who posts revenge porn?

Honestly this is scary as fuck.


It is a gigantic stretch, yes. If Twitter thinks you posted 'confirmed revenge porn' they'd ban your account (as stated in their policy on the subject) and probably report you to law enforcement. If that happened over a cat game video then yes, that would be bad, disturbing, scary, you name it. That's 100% not what happened, though, and turning it into some narrative about implacable machines grinding down our inalienable human rights is a mistake.


So why don’t they unban him without the need to delete the tweet?


I don't know, they're fucked up and have fucked up? But that's the whole point, twitter fucking up is not an accusation of a crime. It's twitter fucking up.


The clear path is... lying about his content? If that's your definition of a system working as intended, we have a problem.


"you cannot be unsuspended until either they agree your appeal is correct (which it is), you give up your appeal (never), and/or delete the offending tweet."

Delete the offending tweet.


The author doesn't mention this in the article, but I think deleting the tweet requires you to acknowledge that it broke the rules:

https://twitter.com/Raddagher/status/1533326201602707458/pho...

Which in this case I think would be a lie. Yes I agree the author should be less prideful and not call Twitter employees "dense" in an appeal. But I don't think the solution is to say the author should lie and say the tweet did break the rules.

Or maybe the rules could be interpreted as "anything that Twitter says breaks the rules does break the rules", meaning it wouldn't be a lie to acknowledge the tweet broke the rules. I haven't analyzed the rules closely.


Just speculating (I hope) but curious what others think.

Say that a video is flagged, user gets the above response and user simply deletes the video. To me it feels like an implicit admission, not having any additional information like "you were right, sorry" or "it is totally incorrect but easier to do this than to fight it".

And say only this metadata is stored, not the video itself, it becomes impossible to judge afterwards.

And say once in a while these logs are reviewed by another party (law enforcement perhaps). Might that be sufficient to end up on some kind of watch list?


Impossible for someone with a spine.


That was my independent takeaway from the article. And requesting an appeal with the verbiage "you cannot possibly be this dense" is a poor way to treat people, especially when the author admits it was the AI banning, not a person, so it wasn't a front line person being dense in the first place.

> There is a clear path to getting unbanned from Twitter, but the author refuses to take it. Shame.

I disagree on the latter part, Twitter would be better without these egotistical people on it in the first place.


Looks like this may be an actual bug in their AI that thinks it is cat torture.

Someone else got banned too: https://twitter.com/elyuwu_/status/1550373074847113217


Interesting that another commenter got a similar clip banned from same source that also ended in a middle grey blank screen. I have a feeling that the algorithm is using luma only without color information, and in the absence of contrast (blank middle grey) it trips the first check it comes across. Would be interesting to test.
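
If anyone wants to poke at that hypothesis, here's a minimal sketch: it just renders a short, solid mid-grey clip to compare against the Stray footage. OpenCV and the 720p/10-second parameters are my own assumptions, not anything Twitter documents.

    # Speculative test clip: a flat 50% grey video with no chroma and no contrast.
    # Assumes OpenCV is installed (pip install opencv-python numpy); parameters are arbitrary.
    import numpy as np
    import cv2

    WIDTH, HEIGHT, FPS, SECONDS = 1280, 720, 30, 10
    frame = np.full((HEIGHT, WIDTH, 3), 128, dtype=np.uint8)  # mid-grey, zero contrast

    writer = cv2.VideoWriter("grey_test.mp4",
                             cv2.VideoWriter_fourcc(*"mp4v"),
                             FPS, (WIDTH, HEIGHT))
    for _ in range(FPS * SECONDS):
        writer.write(frame)
    writer.release()

If the classifier really keys on luma-only, low-contrast frames, a clip like this should trip the same check; if it doesn't, the trigger is probably something else (camera motion, audio, the surrounding frames).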


> But there’s also a larger thing here where Twitter apparently has moderation tools so powerful that they can immediately trigger bans for things other than what they’re actually looking for. The actual execution leaves a lot to be desired here, but the itchy trigger finger cannot be denied. Somehow they can do this, but racial slurs are generally pretty okay, white supremacy is largely ignored, and every Twitter troll I’ve reported for harassment gets a three-week investigation and then a “We find they did not violate our rules” email.

This is one of the things that annoys me most about major sites like Twitter or YouTube. Inconsistent enforcement of rules and little to no avenue for appeal.


And engineers and PMs more than happy to look the other way when their creations go so wrong.

I can't help but wonder how much actual revenge porn is missed because their classifiers are this terrible.


The real stinger is that there's no legal requirement to compensate someone for wrongdoing. Ban someone by accident, and then in a "review" uphold the ban despite overwhelming evidence? That should be a fine, paid out in part to the wronged party.


Twitter is entitled to deny service to anyone for any reason. Why should there be compensation for "accidental" bans?


Do you really think it is in the best interest of society to give megacorporations free rein to dictate who can use our most popular communication platforms, which have captured the market to the extent that many people are only reachable via those platforms? We don't let electricity and water companies just cut people off because some machine learning algorithm thought those people did something wrong.

If twitter does not want the responsibility that should come with administrating the public square, then THEY can find something else to do.


Yes, it is in the best interests of society to let these corporations run their businesses without regulating speech and associations. That becomes a really slippery slope, where the state starts saying what is & isn't allowed to be said, or who is or isn't allowed to be on a platform. A democratic & free government doesn't generally get to dictate what is said or who someone does business with except for national security reasons. Freedom of association is a pretty common right, even more than freedom of speech (which has limits).

The "capturing of the market" that some people are only reachable by that platform is nonsense. There are literally hundreds of physical and digital mediums of communication.

Facebook and Twitter are barely teenagers. Twitter is not doing well, and I wouldn't be surprised if it shrank to a niche in under a decade. Facebook as we know it also is not doing well, though they'll take longer.


They're entitled to, but such a policy might help retention. If there was a competitor that had a wrongful banning compensation policy, I would consider using them over Twitter.

Though I feel a bit silly thinking Twitter cares about retention.


If you only see an accidental ban in this story, you need to reread the story. Twitter, by way of its employees, violated its own policies in the follow-up.


Same thing on Reddit. You can basically advocate for genocide, completely unhindered, by calling entire classes of people "degenerate" (literal Nazi rhetoric), but get banned for calling it out.


Reddit admins are part of the problem in addition to the users.

I actively used to participate in r/machinelearning with long posts & explanations, but the admins often show favoritism to their buddy posters & I have seen good posts get called out or viciously dissed by regulars. Everyone seems to have an 'active duty Superman' complex, desperate to pounce on each other & prove them wrong. The sarcasm is toxic behind the anonymity (HN can be anonymous too - but a lot of us here are pretty open about who we are outside of HN).

I was booted (or whatever they call it) from the sub for calling out a contentious post where the admin responding was definitely wrong in siding with it (but nevertheless exercised his privilege). Long since deleted the account for good.


Not to detract from your story, but it sounds like you are talking about subreddit moderators, not reddit admins. Subreddit moderators are unpaid and only control content on the subreddits they moderate, whereas admins are reddit employees and have full administration powers.

There are problems that come from both of them, however moderators are often the petty power hungry types.


Thank you. Yes that's what I meant. I have been out of the loop long enough to miss these terminology details. Thank you for correcting.


Depends who bans you on Reddit - i.e. moderators on individual subreddits have leeway to make those places whatever they want them to be.


It reminds me of zero tolerance of violence policies at school, where the victims would often be punished instead.


>You can basically advocate for genocide, completely unhindered, by calling entire classes of people "degenerate" (literal Nazi rhetoric),

Are you sure you're not reading too much into it? Without context it's really hard to tell whether a use of "degenerate" literally means you want to genocide some group.


When folks call trans people "degenerate", as many are wont to do on reddit (and as reddit is well known for), it's probably safe to assume they're not just trying to be playful.


Yes, but there’s a Grand Canyon sized gap between being “playful” and calling for literal wholesale murder. I think there’s room somewhere in there for an interpretation like “that person is a super hateful asshole”.


> When folks call trans people "degenerate", as many are wont to do on reddit (and as reddit is well known for), it's probably safe to assume they're not just trying to be playful.

Can you link to such a comment that was left unmoderated on reddit despite being reported?


I don't have hate speech bookmarked, sorry.


> I don't have hate speech bookmarked, sorry.

Surely you have instances of the hate speech you claim went unmoderated upon report. Transgenders are probably the most protected group on reddit, so much that questioning their arguments even respectfully is banned from that website. Nobody asked you to bookmark anything, but you need to source your claims.


> Transgenders are probably the most protected group on reddit, so much that questioning their arguments even respectfully is banned from that website

This is demonstrably false, literally any r/ukpolitics thread with a trans-related headline will contain a ton of respectful disagreement and then plenty of open prejudice too.

If we're being honest, be honest, don't twist a narrative the other way.


> If we're being honest, be honest, don't twist a narrative the other way.

Yes, let's be honest, that's why I asked the parent to source their claim and all I got is a lazy cop out.


Does "degenerate" only imply genocide when referring to trans people or has it always implied that? I never got the memo on this one.


Maybe 10 years ago this was true, but Reddit today is heavily censored at the admin level.


> Same thing on Reddit. You can basically advocate for genocide, completely unhindered, by calling entire classes of people "degenerate" (literal Nazi rhetoric), but get banned for calling it out.

Yeah, you'd have to link to that comment that went unmoderated. Given how petty reddit mods are about identity politics on main subs, I'd wager that this sort of speech doesn't fly there...


A few people commenting on the post say they too had similar bans when clipping through the world and a grey screen was shown. Maybe the solid gray is triggering the AI to pick any policy violation that once had gray in it...


Or, as suggested by another commenter, the AI is triggering off of the camera moving quickly downward while pointing upward, which is similar to upskirting videos.


It's clear that the broken element in this process is not the AI banning the writer of the post, but the poor, presumably human, review.


I'm not so sure I agree.

The typical argument for AI moderation in these situations is something along the lines of "Twitter is so big how could we possibly moderate at this scale without AI?"

Does a company have a right to scale faster than it has the ability to properly moderate? Should they?

Looking at a more extreme example: If General Motors used AI to design cars and they increased the likelihood of fatalities, would we feel any different?


>Does a company have a right to scale faster than it has the ability to properly moderate?

A right? Yes.

>Looking at a more extreme example: If General Motors used AI to design cars and they increased the likelihood of fatalities, would we feel any different?

Getting (temporarily?) banned from Twitter isn't even close to being as bad as a fatality, so the GM example doesn't make any sense.


Human review is pretense, it costs more to do anything else. A human is told to process these as fast as possible, so the issue can be resolved. With these incentives, it wouldn't be reasonable to actually watch the videos in question. That's not what the system is designed to reward.

The human is there to rubber stamp the AI decision as fast as possible, so that Twitter can tell you your case has been reviewed by a human and settled. The important metric is speed, because that directly translates into cost.


It's telling that the AI doesn't just flag something, without banning it, and let a human do a proper review.


An AI arbitrarily banning someone for a completely innocuous post isn't broken?


If the false positive rate is low and human review is swift and not broken, I really wouldn't mind.

It's a force multiplier that every social media platform needs to not drown in costs for human moderation. Especially human moderation as described above.

Unless you get people to pay for social media. Good luck with that.


> arbitrarily banning

If that ban by AI is available for review and appeal, it's just triage that admits false positives. The alternatives that I can fathom is to either allow unsavory and potentially harmful content to be present on your platform until human eyes get on them, or to have all comments await moderation until human eyes get on them.


Good human reviews are, well, nearly impossible at the scale of Twitter or Google. It would require employing hundreds of thousands of reviewers, training them, and paying them decent salaries for one of the most horrible jobs in the world. So, everybody is trying to automate as much as possible in this area. I am sure that in this case, if there is a human even involved in the escalation, it takes them a few seconds to decide on each case, with poor decision quality.


It's the year 2032, and Wings, Jack Dorsey's self-driving taxi service, is a hit. Because Wings doesn't have to pay human drivers, most of their rides are free. The company makes money by showing ads during rides and selling data about users's movement habits. Wings is not profitable yet, but revenue is through the roof, and the company is expected to IPO for a zillion dollars.

Unfortunately, Wings's self-driving AI is much more likely to destroy property than a human driver. As a result, some members of the public have called for every Wings car to be remotely controlled by a human. However, others push back. Employing good human drivers would be nearly impossible at the scale of Wings. Here is an interview with a poor grandmother who relies on Wings to see her family, and here's how a nonprofit uses Wings to find homes for lost puppies. You really don't want to go back to paid Uber and Lyft rides, do you?

There are a couple of problems with this analogy. But the core point is this—if a company can't sustain a key aspect of their business at scale, then perhaps they shouldn't be able to scale.


There is an equilibrium point beyond which mass protests and regulatory pressure start. Every company is iterating and optimizing to find this equilibrium point, and then stays slightly behind it.


I happen to think today's social media companies are well past the equilibrium point.


Well, I don't see mass protests against Google and Facebook yet, so perhaps they are not.

(I do see mass protests against Uber though).


Funny how you get downvoted for saying something everyone on HN agreed on when "Open Letter from Facebook Content Moderators" was published, with comments like "That is a simple problem for which Facebook should be able to apply existing technology. There is a lot Facebook can do, but they are not, because they are lazy.", calling for more automation and less traumatic moderation work.


I didn't agree on it. What likely happened was a bunch of excited folks wrote a whole bunch of words, and that volume gave the appearance of consensus.

Automated review is not good enough for services that are so large that they are de facto public squares. If the world were just, when you got that big, the rules would change to prohibit you from doing service-wide content filtering.

> ...calling for more automation and less traumatic moderation work.

Pay me 80% of what I would be making as a programmer (and keep up with changes in the market rate for programmers), and I'd be totally willing to do that moderation work. [0] There are _tons_ of people who have my intestinal fortitude out there... problem is that the enormous "social media" companies don't want to pay enough to get people who are well-suited for the work.

[0] You might counter with "Oh, well you can't _possibly_ imagine the things you'll see, you're all bluster and bravado and will quit within the month!". I'd counter with "Yeah, well you have no idea the things that I _have_ seen.".


Oh, so they'd have to spend a bunch of money to treat people like human beings?

Boo fucking hoo.

The money is the problem here. We should require them to spend that money, and if they can't remain profitable while employing sufficient human moderators to not be a net detriment to society, they should not be allowed to keep operating.

We've gotten way too far down the rabbit hole of "any company that can conceivably exist must be allowed to exist, so long as they can make a profit."

We really need to get back to companies having to provide some meaningful net benefit to society, rather than just making scads of money for their owners.


OK, press for regulation! So far, I haven’t seen any mass action that would be sufficient for the big tech to change their ways. People are mostly content with the status quo.


Your point being? If you are building a many-billion-dollar company on user-submitted content, moderation is pretty much a necessity. A hospital can't decide not to sterilize because it is "too difficult" either, can it?

Especially when it comes to Google, which tries to entangle itself in every aspect of your life, being banned from all services without any potential for a proper review just because a neural network decided to throw a hissy fit is completely unacceptable.


It's interesting to read the comments here and the debate around whether social media counts as a public square or not.

Ultimately, the web, and social media, is where a majority of our citizens spend a large chunk of their time. While it's certainly true that you don't _need_ to access them to live your life, for the vast majority of us it would be more than an inconvenience to be banned.

In all other aspects of our lives, we do not accept rules imposed upon us by a large private actor. There are courts, and appeals processes, and politicians, and other channels through which we can assert our societal norms. Not so with social media today.

These societal rules and norms, are, and should be governed by a democratic process. Social media is undeniably now such a large and important part of our society and it's absurd that we do not collectively have control over the rules around it.

Other infrastructure, like the railways, also began as private endeavours; your rights there are now protected by the courts and the laws of the land.

At some point, the interest of the collective takes precedence over the private rights of Zuckerberg and Dorsey. We are well past that point with social media. These private companies should be considered access points to the social graph, not owners of it, and therefore regulated as such.


>In all other aspects of our lives, we do not accept rules imposed upon us by a large private actor. There are courts, and appeals processes, and politicians, and other channels through which we can assert our societal norms.

Out of curiosity, where do you live?

Where I live, I accept that my phone will receive unsolicited robocalls and texts, and that the majority of mail delivered to my address is, similarly, unsolicited waste.

Where I live, I accept that the marketing efforts of a coalition of firearms manufacturers carries more weight than schools littered with dead children, a thousand unarmed people shot to death by police every year, thousands annually killed in domestic situations or on street shootouts. There's also, of course, the #1 cause of death by firearm: suicide.

Where I live, a large private actor repeatedly abused the legal system throughout his career, and through a particular quirk in our bizarre republican democracy, was elected to the highest office in the land after losing a popular election by millions of votes. He continued to break laws while in office, but was never held accountable, and the second time he lost a popular election, went on to break more laws, along with his cronies and a crowd of supporters... There are myriad reasons why this private actor should be in jail, but the government where I live has largely been gutted of any regulatory power.

More often than not where I live, you have one choice of broadband provider, provided by a large private actor, who often times has worked with politicians to outlaw choice.

Maybe you live somewhere else, but where I live, we are reaping what decades of aggressive pro-business anti-government action has sown.


I live in the UK.

Many of the things you mention are reasons why I would never consider making the US my permanent home.


> These private companies should be considered access points to the social graph, not owners of it, and therefore regulated as such.

Okay, but how? Every time I see calls for regulation, I don't see any specifics.

Regulation isn't a magic wand that solves all problems you don't like while keeping the parts you want to keep. We got regulation for cookies, and now we're all doomed to click cookie banners on every website we visit forever.

What exact regulations would you apply to Twitter to solve this problem without forcing the companies out of business? Any solution that ends up requiring Twitter to fend off constant legal battles from people angry about being banned or requires scores of humans to moderate content to some government-stated standards or face expensive fines just doesn't work. If US websites were suddenly subject to onerous legal standards that weren't required elsewhere, companies would move their headquarters to other countries ASAP.


Agreed, dealing with bots, fake accounts, troll farms, and the scum of the universe actually posting child abuse, revenge porn, or other such things, is a hard problem, especially at the scale of Twitter.

Designing and deploying mechanisms that are effective at preventing those and have zero false positives is very hard.

Having humans in the loop at that scale is very hard.

It's all a tricky balance. You can force people to have zero false positives but it's futile; in practice they still will have some. You can force them to manually verify each post, but humans also have false positives, and this might come at a cost that Twitter can't afford. It's complicated.

What's less complicated is running your own website where you can be your own moderator...

I think an alternative is to give people some rights. Like establish a category of things that is always allowed. Then use the normal court system. Of course, it's a terribly slow and expensive appeal process, but also the best, and publicly funded.

Similar to a case where you claim you've been refused access to a job or business because of a protected characteristic for example. You'd need to sue.


Just spitballing:

Then pass a law that says any company with a user count exceeding 100,000 must not prevent access or disable an account unless the user in question breaks the law on the platform.

Then establish a simplified claims court, where for a nominal fee, say £500, you could apply to have your case reviewed by an independent judge.

If you lose, you forfeit the funds, and have to pay the costs to the company you claim against, capped at a reasonable fee. Say another £500.

If you win, the company pays the costs, nominal damages, and must also reinstate your account.

If you cannot run your business at scale without externalising the costs on innocent users through false positives, then the business shouldn't be running at all.


> These societal rules and norms, are, and should be governed by a democratic process.

By deleting your account at services that show such capricious behaviour, you can cast your democratic vote against such services. Not doing this, or at least starting or joining a huge protest against such services as soon as you hear about such behaviour, means casting a democratic vote that what the big tech company is doing is all right.


I don't think you know what a democratic vote is.


"Democratic vote" does not only mean "put ballot paper into ballot box". You can also vote with your feet.


Perhaps we need the word "agorocracy" to describe "rule by markets".

That's not quite what you're proposing, since you presumably only want the market to rule over Twitter and not over the rest of society, but unfortunately the latter scenario is closer to reality than most would like.

In fact, suggesting that having a few large social media companies to choose between constitutes a free market (despite the high switching costs and non-interoperable products) is as misleading as thinking that the First Past The Post voting system (and weak campaign finance laws) provides representative democracy.


We want government to run or regulate these systems as long as we trust the government but what happens when we don't trust the government anymore?

Like, for example, if in this year's midterms there's a Congressional supermajority, and then the president is elected from that same party two years later and the courts are of that party.

Will we really want government regulation of things then?

There would be widespread mistrust by many groups of people of the government having a single party in charge.

Once government regulations are established and approved, they can be used by the government at will.


If you don't trust your government then you have bigger issues. And that government could always choose to start regulating Twitter to their advantage if they have that much public support, no matter what regulations there are now.

Also, I think if any single party can get that much of a majority (or even just a majority on its own) then you already have an extremely unhealthy democracy, even if such setups are quite common in "developed" countries.


For all its failures, the government and courts do a good job at mediating disputes in every other aspect of our lives, so why not this one too?


Many governments don't though. If you think about it, the modern U.S. is a pretty rare phenomenon throughout history, and it's not guaranteed to continue forever.


If we go one step back, bars, clubs, pubs, discos, and churches/places of worship were where a majority of people spent their socializing time prior to social media platforms.

Those places would routinely throw people out and all had bouncers and sometimes gatekeepers for entry and all that.

There are certain specific patterns that we did regulate, like the tradition where certain gender or race weren't allowed in them. But generally, someone causing a ruckus gets thrown out.


Your comparison falls short though, because there's not really a historical equivalent we can compare to.

Being banned from social media is not like being thrown out of a pub by a bouncer. It's like being thrown out of every pub, nightclub, and church by the bouncer.

If I may take an example from my own life: if I were to be banned from WhatsApp, I would not be able to

* take part in my son's school group and social events
* participate in the neighbourhood group
* participate in my local sports team
* participate in the local party
* participate in the business
* participate in my university alumni group

WhatsApp completely owns the social graph where I live. Almost to the point where one would not be able to live without it. A false positive ban would be catastrophic for me.


I understand the impact difference could be argued, but in some places the local community center or pub would also be the home to most of those. And similarly, it depends on the choice each of these groups makes, like to all be on WhatsApp; some could be on Telegram, on Hangouts, on Facebook, etc. Nevertheless, I think you're right, scale is different and maybe that needs different rules because of that.

Though, scale might also be why people can complain so much about it. I know people who have been shunned by their church, and the impact on their life from that is major. But if there's only a handful, they can't be a big voice to complain about it. On Facebook, everyone banned can band together to complain about it and create a counterbalance.

Now false positives I think are an even bigger argument. It's probably a lot more rare for a local pub or community center to wrongly kick you out. It can happen, but I'd say there's less chance, and you might get an easier time coming back or appealing.

Something else I'm curious about is how come this has so much impact on you, but so little on me? If I get banned from WhatsApp I'm just inconvenienced, but I can still call or text those people, mail/email them, show up to their physical locations in person, etc.

The "couldn't be able to live without it" sounds a bit exaggerated to me, but maybe some places are way more dependent on WhatsApp than I realize, with no alternative way to reach anyone?


To give you just one example as to why it would have an outsize impact on my life. All parent association and much communication from school comes over a WhatsApp group with ~60 parents in it. If I were to be excluded then my son would miss out on lots of important information about upcoming events like birthday parties. It would be difficult for me to convince the parents of every other child to send their birthday invite especially to me by MMS or some other means. I participate in a handful of such groups. My sports club for example, certainly wouldn't inconvenience everyone by shifting to Signal or some other service just because I got banned from WhatsApp.


>In all other aspects of our lives, we do not accept rules imposed upon us by a large private actor.

Sure Amtrak is government-run, but many other things we use on a daily basis aren't. Restaurants and stores have their own rules and can kick you out if they think you violated them, without much recourse for you if they were wrong.


see my other comment. In short: it's not like being kicked out of a restaurant, it's like being kicked out of every restaurant.


> Overnight, I received a response that my appeal had been denied. So a human being, someone who works at Twitter dot com, looked at that video, looked back at the rule it was breaking, looked once again at the video, and went “Yeah, this all checks out.”

My guess is that appeals are either processed by some simple code or offloaded to some cheap labour service - such as Amazon Mechanical Turkey or some sweatshop in a developing country.


I would be surprised if every initial appeal isn't deflected by automation, as first step, just to fend off "annoying users" and get rid of those who don't care.


The author responded in the appeal with "You cannot possibly be this dense."

If you want to win an appeal, starting by accusing your appeal reviewer of being dense isn't a good strategy.

I wouldn't be surprised if they have a rule that covers this type of behavior in appeals, and the appeal was denied due to language instead of the content itself.

Is it fair? From a content moderation perspective, no it's not. But I can also see why a platform would not be in a rush to restore accounts where the appellant can't even behave nicely in a two-sentence appeal.


> behave nicely

Towards AI that just banned you?

The appeal process could be another AI. It's concerning how close you are to suggesting we "be nice" to the robots who wrongly ban us in the first place.


If the appeal reviewer feels targeted by that accusation then they deserve it.

Those less dense would understand that the thing being called dense is whatever system made the initial accusation and resulting ban (i.e. Twitter as a whole) and not the random support person processing the ticket who the user has no way of knowing and has never interacted with.


It’s Mechanical Turk. Although Mechanical Turkey sounds cooler. :)


I'll leave it unedited for the giggles.


For the gobbles


For sake of correctness its Mechanical Türqiye :D /jk


strapping a turkey to a keyboard might actually be more effective...


How long until people start selling DRM technology that purposely prevents people from sharing clips of games on Twitter and Facebook by embedding adversarial GAN attacks to trick the AI into thinking that they contain illegal content?


A simpler technique which already exists is embedding invisible unique watermarks in the video output of the game, which can be revealed with image processing[0].

Of course that doesn't prevent the image being tweeted, but it does mean they can punish the person who made the tweet, which might be enough of a deterrent in most cases.

[0] https://www.reddit.com/r/battlefield2042/comments/p319oj/if_...
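
To illustrate the general idea only (this is not the technique the linked thread describes, and the names below are invented), here's a toy LSB-style watermark: write a per-user pattern into the least significant bit of each pixel, then recover it later by keeping just that bit and amplifying it.

    # Toy LSB watermark sketch -- illustrative only, not any shipping game's method.
    import numpy as np

    def embed_watermark(frame: np.ndarray, mark: np.ndarray) -> np.ndarray:
        """Overwrite each pixel's least significant bit with a 0/1 pattern."""
        return (frame & 0xFE) | (mark & 0x01)

    def reveal_watermark(frame: np.ndarray) -> np.ndarray:
        """Keep only the LSB and scale it to full brightness."""
        return (frame & 0x01) * 255

    frame = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)  # stand-in frame
    mark = np.zeros_like(frame)
    mark[10:20, 10:200] = 1          # pretend this region encodes a user ID
    tagged = embed_watermark(frame, mark)
    assert reveal_watermark(tagged)[10:20, 10:200].min() == 255

A real implementation would have to survive re-encoding and scaling (which a bare LSB scheme does not), but the embed/reveal split is the same.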


I couldn't see that having any demand from game companies. They generally want you to share gameplay footage so other people will see it and want to play too, even preventing spoilers isn't worth burning your customers like that. But I could see it getting demand from people who want to use it as a weapon.


Odd. The website Fanbyte seems to be blocking users from India. Surely a simple blog post must have fairly low bandwidth requirements. And a simple CDN should alleviate issues? Or do they just not like Indians?


Same here, I am connecting from Turkey and we are also banned.

    We've discontinued servicing users from your location and are thereby preventing access to any of the websites owned and operated by ZAM Network, LLC DBA Fanbyte (the owner of Fanbyte.com):
    This decision is ultimately one we were forced to make due to the number of visitors from your region when compared to the operational costs necessary to continue providing access there. It was a tough decision but one that we ultimately had to accept. This is an indefinite decision. We genuinely apologize for the inconvenience this will cause all of those who reside within your region.
    Note: This is not related to Russia/Ukraine or any other political/military actions around the world.
    If you need to reach out to us, please use our support portal.


At least Twitter has an appeal process. My Instagram account was switched off without warning. If I try to log in on desktop, I get a message telling me to "confirm your information using the Instagram app to try to get back to your account". The problem is that if I try to log in to the app, all I get is a black screen. This happens on both Android and iPhone. There are some buttons on the Android version, but the interface is unresponsive. It's like I'm stuck in some sort of verification loop that I'm unable to close.

When I tell people about this problem, they immediately act like I've done something wrong. "Is your app up to date? You probably need to update something", is the common refrain.

I've tried all the online guides. All my software is up to date. All I want at this point is to delete my account. But I can't, since the deletion link takes me to a page that asks I log into the app to try to get back to my account.


If you’re unable to access your email notification settings related to the account you’re locked out of, and have received any non-transactional emails related to the account, you can email their legal team regarding a CANSPAM violation to get a real human on the case (or a payday if they choose not to respond). May not apply to your circumstances; just throwing it out as I’ve had it work before in a similar context.


Wow, thanks for this, but unfortunately I disabled all those emails in the past.


Then your best option is to move to the EU (or another region with adequate consumer protections) and send an Art. 17 GDPR request. That, or try to get your local government to better advocate for you. Probably neither is a very realistic option for you, but I'm glad that at least some places are starting to look out for tech users, even if we still need a lot more.


Damn, I lived in Belgium two years ago. What a missed opportunity!


Why is the consequence account suspension? We know the value these accounts can have and that moderators do not always get it right. Unless the user has a history of bad behavior, the consequence should be flagging and hiding the content before warning the user. LinkedIn is one social media platform with a more sensible policy, since it follows the latter approach.


> Why is the consequence account suspension?

Good question.

It feels Orwellian that Twitter suspends you and makes you delete your own tweet, on first offense, instead of just auto-hiding that one tweet.


The wording of his appeal sounds extremely unhelpful and needlessly personally insulting to me.

I'd strongly advise not to ever communicate as if you were talking to the unfeeling corporation itself, because in reality you are addressing a person that you need to solve your problem, and calling them "impossibly dense" is just a bad move.


> So a human being, someone who works at Twitter dot com, looked at that video, looked back at the rule it was breaking, looked once again at the video, and went “Yeah, this all checks out.”

I am going to go out on a limb and assume that the people handling these appeals are expected to hit a certain number per day whether explicitly or not. And to paraphrase Charlie Munger, there's the incentive and this is the outcome.

I imagine someone sitting at a screen showing a queue of hundreds of thousands of posts awaiting review. Maybe there's even a leaderboard showing which employee has reviewed the most this month. Why waste their time watching the video in its entirety? Easier to just deny the appeal and move on to the next.


> So I appealed with my trademark good humor.

Maybe it was not the right time to be concise and witty. You just can't know who reads it, which culture, where and what mood they are in.


Calling the support staff "dense" was probably not the right move..


«Turns out that maybe we shouldn't be letting AI handle moderation.»

It's always the same... People speak up and things suddenly are a problem ("turns out that...") only when the bad thing has happened to them.

If there's any actual point that we should take home is that bad consequences are going to happen, and we'd all better listen up when other people warn us about these things before it's too late.

AI moderation is just one of the topics we are all generally ignoring.


In my circle of friends and family, I’ve been warning them every time I hear of someone unjustly banned somewhere that:

(a) big tech has too much power (b) you should probably not rely too much on Facebook or Twitter for photos, groups, etc

Without fail, I’m “just delusional” and “if someone was banned, it’s because they did something to deserve it”, _until_ they themselves get banned for something.

The most frustrating part - one friend of mine who was banned for something he posted, appealed and was reinstated. When he was banned, “big tech has too much power”, but since _his_ appeal was successful, he’s already back to “anyone who’s banned deserved it”.

I have yet to be banned myself for anything (a fact that my critics like to use as “proof”), but i recognize that if it can happen to someone posting a video game clip, it can happen to me.


I'm now 4 days into a ban from the 10% of the Internet hosted by Akamai because I ran a script that took screenshots of my Pinboard bookmarks (one at a time across many sites) for a couple of days. I cannot find any way to contact Akamai to ask to be re-enabled.

I support stopping bad behavior online and I get that automated detection is hard. But there's gotta be a safety valve for when the systems go wrong.


Consider it as a lesson not to make yourself too dependent on the whims of big tech companies. Ideally, learn to set up your own infrastructure.


https://web.archive.org/web/20220720163040/https://www.fanby...

The site is showing me that they cannot serve me because of the place I live.


I get this on visiting the linked website. What?

> We've discontinued servicing users from your location and are thereby preventing access to any of the websites owned and operated by ZAM Network, LLC DBA Fanbyte (the owner of Fanbyte.com):

This decision is ultimately one we were forced to make due to the number of visitors from your region when compared to the operational costs necessary to continue providing access there. It was a tough decision but one that we ultimately had to accept. This is an indefinite decision. We genuinely apologize for the inconvenience this will cause all of those who reside within your region.


On the other hand, it's probably good for your mental health that you got banned from Twitter. Idea: delete your account there.


I'm amused that this article ranting about bans is published on a site that itself bans people merely based on their location:

> We've discontinued servicing users from your location and are thereby preventing access to any of the websites owned and operated by ZAM Network, LLC DBA Fanbyte (the owner of Fanbyte.com)


I have been suspended from Twitter for very silly things that don’t match the rule they claim I violated - very neutral statements that they claimed were “wishing harm” or something. It makes me appreciate the actual human moderation of HN.


Sucks when there are false positives, but I am glad there are efforts to reduce the likelihood that a human has to watch and flag a video that could be “revenge porn”. Plenty of threads on the lasting effects of exposure to this type of content on moderators. (For example, here's a HN thread on a Facebook content moderator's resignation: https://news.ycombinator.com/item?id=26819883)

Seems like additional signals should play into a confidence score though: how new is your account, what content is your account viewing, etc.


> It’s a prime example of maybe how we shouldn’t be letting AI run the world maybe, but this was clearly a mistake. So I appealed with my trademark good humor.

> Overnight, I received a response that my appeal had been denied. So a human being, someone who works at Twitter dot com, looked at that video, looked back at the rule it was breaking, looked once again at the video, and went “Yeah, this all checks out.”

The most cost-effective and efficient implementation of an automated appeals process is the UNIX command "yes no". I'm convinced it's also in widespread use.


I just don't believe that many of these large tech companies actually have a human as the first, second, or sometimes even third contact for these kinds of issues.

But what I find especially offensive is how the execs of these companies make bank while lying about the service they provide. Yeah, the service may be "free", but of course it's not really free. So any suggestion that we should not complain about quality for a "free" service is invalid.


Twitter banned me for calling Maxine Waters an “old hag”.


We are doomed.

Well, more seriously speaking, there's a serious issue behind these anecdotes: with things like Content ID and automated moderation, I see no way around platforms for user generated content with even the slightest commercial interest gradually turning into alternative outlets for corporate media.

The problem being, I do not see an alternative.

Clearly, there are vital economic incentives for these platforms to pacify first those who are both the biggest threat to them (in terms of lawsuits) and the sole or major source of income. Clearly, big platforms like these can't viably be run without some amount of moderation (as there are bad actors out there) and realistically also not without at least some oversight of the IP situation. (The latter may change eventually, as soon as there is corporate IP exclusively on these platforms. Direct and indirect content filtering, as by recommendation systems, will eventually ensure this, with more and more professionalized user channels becoming alternative media corporations themselves.) Clearly, more conventional measures, like community efforts (or sanctioning by social convention), won't do, as the world has become too divided for this and there is no guarantee that social norms will always coincide with the vital economic interests of the platform. For the big platforms, there seems to be no way around automation (which is also enforced by the EU), but this is also clearly dysfunctional.

So, how do we get out of this mess? (Any proposals are welcome.)


oh, looks like this site (fanbyte) bans people by their geolocation. ironic.

(this is Türkiye at my case.)

> We've discontinued servicing users from your location and are thereby preventing access to any of the websites owned and operated by ZAM Network, LLC DBA Fanbyte (the owner of Fanbyte.com):

This decision is ultimately one we were forced to make due to the number of visitors from your region when compared to the operational costs necessary to continue providing access there. It was a tough decision but one that we ultimately had to accept. This is an indefinite decision. We genuinely apologize for the inconvenience this will cause all of those who reside within your region.

Note: This is not related to Russia/Ukraine or any other political/military actions around the world.

If you need to reach out to us, please use our support portal.


Let's face it: the amount of recorded game content saved on servers is enormous. The value of this content does not cover the cost of saving it. So... aggressive "rules" and algorithms for banning could be just excuses for freeing up space to save more valuable content.


Here's an example of where metrics go wrong and create bad incentives.

What Twitter (and virtually everyone else, to be fair) is doing is optimizing for the number of issues that are dealt with automatically by some ML system or whatever. This is a bad idea because it incentivizes false positives, and the only recourse is kicking up a big enough stink on social media such that an actual human looks at it. Even "appeals" get automated in this universe.

Here's what you should optimize for: the number of cases a human support person can deal with (per unit time). Appeals must be manually reviewed, so they will drop the cases/hour. This disincentivizes false positives. If you optimize for this kind of system you will use automated systems to funnel cases to a human for review.

I know of several people who have gotten suspended or banned off these platforms for completely stupid reasons that any human looking at it would recognize. I'm wary of bad regulation here and certainly don't think people have a "right" to be on Twitter (for example), but I do believe people should have a right to human review for any ban, and you should be able to challenge that ban in court if you have to. There should be no "we can ban you for any reason" out.

Twitter in particular is so bad that the usual avenue of getting something to an employee is met with limited success because this had become so rampant (ie necessary) that Twitter was internally cutting down on that.

Lastly, any such ban should be segmented. Google is the biggest violator of this principle. You should never, for example, lose access to your Gmail because an automated system falsely detected a questionable photo you uploaded or another automated system decided your name wasn't "real" (the latter happened in the Google+ days).


Similar but less stressful: had one of my photos marked as "porn" on Tumblr back when the ban first came in. It was a Premier Inn room, focused on the double bed with my rucksack on it. Utterly baffling. Thankfully they undid the warning when I appealed.


I've been banned twice on Twitter for sarcasm that was misread as encouraging violence. Appeals weren't even denied. They just never looked at them. Waited two weeks each time.


And now Twitter, too, or perhaps this has been going on for a while. I thought it was just Google that were AI-banning people left and right with no recourse.


Being banned from Twitter may end up being a blessing in disguise. I don't think open air micro-blogging brings out the best in people.


Oh, this is ridiculous, because Twitter does nothing with accounts that only exist to perform harassment or hate speech.


Stop complaining about policies and implementation of censorship, and switch away from donating content to censorship platforms.

This will continue so long as you donate content to a platform that feels entitled to decide what you are allowed to speak or read.

Fuck that noise. Join us on the fediverse. @sneak@sneak.berlin


This is what I don't get about those US based companies. They hired thousands of employees, why can't they have a few dozen human employees who manually evaluate the appeals? That's a necessary cost of providing a commercial Web service.


Maybe it would have been easier to get his account back if he had left off “You cannot possibly be this dense” when making demands on the free service he uses at the pleasure of the company that owns it.

Civility costs you nothing. Snark gets you nowhere.


In short: An automated process made a mistake and a customer made it worse by being mean.


To be honest this is where something like the GDPR does do important work. In this case, Twitter have formed an opinion about this user (which they may have shared more widely). It is important to be able to find out what that opinion is and have it corrected if it is in error.


Most likely not. This is most likely a machine learning model that assigns a confidence score for pornographic material etc. and has mis-scored this post highly enough to initiate an automatic suspension.

I doubt they use prior account activity as a metric.


I'm in Thailand and just get this message:

We've discontinued servicing users from your location and are thereby preventing access to any of the websites owned and operated by ZAM Network, LLC DBA Fanbyte (the owner of Fanbyte.com):

This decision is ultimately one we were forced to make due to the number of visitors from your region when compared to the operational costs necessary to continue providing access there. It was a tough decision but one that we ultimately had to accept. This is an indefinite decision. We genuinely apologize for the inconvenience this will cause all of those who reside within your region.

Note: This is not related to Russia/Ukraine or any other political/military actions around the world.


From https://eu.forums.blizzard.com/en/wow/t/a-good-tip-for-peopl...

The following countries have been blocked from accessing any of the websites owned and operated by ZAM Network, LLC DBA Fanbyte (the owner of Wowhead. com):

China, Indonesia, Philippines, Thailand, Turkey, Serbia, India

This decision is ultimately due to legal compliance and liability issues with respect to these countries and their laws/regulations. This action was not based on any specific law or regulation in any one or more of the listed countries. Our decision was due to the legal costs involved with reviewing the laws and regulations of the listed countries, in contrast with the amount of visitors/revenue earned from them. As a result of this analysis, we decided not to pursue a legal review and, to mitigate our liability in these countries, we opted to block access entirely. This is an indefinite decision. I and the rest of the Wowhead staff genuinely apologize for the inconvenience this will cause you. It was a tough decision but one that we ultimately had to accept.


I wonder if some of those countries could come up with a unified regulatory framework so companies have a way of bringing down the compliance costs of serving those countries, rather than freeze them out entirely.


I'm surprised to hear there's that much regulation in Thailand (if it's true). I do know a couple porn sites are blocked here though

I own a site that's monetized primarily off display ads and I can tell you that advertising revenue in those countries is abysmally low so they probably just don't want to expend any effort to support those countries


Thailand has laws about how one can talk about the king, right? Maybe there's no safe harbor provisions for websites, and a site this size is too small to fight, but big enough to attract attention?


It sounds like they aren't basing it on how much regulation the jurisdiction has, but whether the jurisdiction brings in enough revenue to justify reviewing the regulation in the first place. Even if there is no regulation, you need a lawyer to tell you that.


What's the threat of not living up to their regulations when you're not located there? Isn't the worst scenario that they just block your website on their end?


This is a blanket ban of multiple countries. It's most likely something very unspecific and probably unrelated to the local regulatory situation. Also, Fanbyte seems to be an influencer marketing platform. If they won't serve influencers in these countries anyway, it makes sense for them to block all users from there to stop them messing up their metrics or targeting or whatever.

This isn't really of much value to most people. If you ask me, the less influencer crap that exists in this world, the better.


Yeah actually I completely forgot about that too somehow. And there are strict libel laws as well


I hit a very similar message several times when I was traveling in Colombia, including on websites that I specifically needed to access to plan my further travel. It's extremely annoying. I suspect there is some button on Wix or a WordPress plugin that webmasters are naively using to fight spam that's inadvertently locking out legitimate users from certain countries.

It's a really big step backwards for the idea of a "world wide web" and global internet, imo, and plays right into the hands of authoritarian nations who are trying to sell developing countries on the idea of "internet sovereignty".


So they host articles that complain about being banned on Twitter, but are ok with banning entire countries?


The article isn't about being banned on Twitter. It's about automated moderation (a side effect of which is, yes, a Twitter ban.)


The title of the article is literally "How a Clip From Stray Got Me Banned From Twitter".

I fail to see why also discussing the reason of the ban takes anything away from my original point.


Because the thesis is that automoderation is a problem. The entire point of the article is using Twitter as an example of a problem, not the root problem itself.


I'm getting the same message from the OP's website, and I'm in eastern europe. Weird.


Serbia? Apparently they ban these countries due to the cost of legal review (?):

China, Indonesia, Philippines, Thailand, Turkey, Serbia, India

Wowhead is owned by the same company and this is their reasoning: https://eu.forums.blizzard.com/en/wow/t/a-good-tip-for-peopl...



Could you please elaborate? Twitter stopped working with a particular ISP because of cost-related problems?


The host of this article has decided to stop providing service to Thailand. Maybe because of the cost of compliance with internet regulations there?


Hard to elaborate since I've been blocked and can't see anything except for literally just that message


Well, at least it seems there are other folks to commiserate with you. I'm based in Thailand too, and this isn't the first time the region has been blocked from content that seems like it should be benign, like a news article.


Eh... It's no big loss. I can live without it.


I was Googling around for why this might be. The best I can guess is that it has to do with the Thai PDPA (Personal Data Protection Act) that recently started getting enforced. This is the equivalent to the EU's GDPR act, but whereas it might be worth it to Fanbyte to take steps to comply with GDPR, the Thai market may not be a big enough part of their readership. This is just speculation.


It seems like the problem with many of these tech companies (and many modern companies in general) is that they've scaled faster than they have the ability to operate properly.

    "How can we possibly moderate properly without AI?"
is really just another way of saying

    "We don't want to pay the money it's going to cost to moderate effectively and we're going to cross our fingers that we can solve problems caused by technology with more technology."
That failure to operate is causing harm but the kind of harm it causes is hard for the public or politicians to understand, so it goes on unaddressed.


It's not "want", Twitter (or youtube, or facebook, etc. etc.) can't in a million years afford content moderation because of the sheer firehose of content. They'd literally have to employ everyone on the planet to get enough moderators in every language for every tweet, post, video, etc. on the platform.

And then only one company would have enough moderators. Moderators are a finite resource, there aren't even enough humans alive to moderate all content on all platforms.

But, the law goes "well, figure something out, you have to moderate", and now we've by law guaranteed that whatever solution gets implemented to follow the letter of the law is going to be shit, because it's by definition going to be automated content moderation, and given the volumes involved, it's modern neural-net based because that's the only technology that's even remotely shown it has an over 50% success rate.

Well done, us.


You don't need human eyes on every single tweet, just need enough people to review the flagged content. You could still have algorithms flag things and rank them by confidence levels. You could also hide the content first and queue it for human review instead of auto-banning the user. It's a lot of work, but I feel like there's a lot more that could be done.
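
As a rough sketch of what that could look like (the thresholds, field names, and whole flow are made up for illustration, not anything Twitter has described):

    # Hypothetical "flag, hide, queue for humans" triage -- the numbers are invented.
    import heapq
    from dataclasses import dataclass, field

    AUTO_HIDE_THRESHOLD = 0.80     # hide the post from public view, don't touch the account
    HUMAN_REVIEW_THRESHOLD = 0.50  # below this, do nothing

    @dataclass(order=True)
    class ReviewItem:
        sort_key: float                      # negative confidence => highest confidence first
        post_id: str = field(compare=False)

    review_queue: list[ReviewItem] = []

    def triage(post_id: str, confidence: float) -> str:
        if confidence >= AUTO_HIDE_THRESHOLD:
            heapq.heappush(review_queue, ReviewItem(-confidence, post_id))
            return "hidden_pending_human_review"
        if confidence >= HUMAN_REVIEW_THRESHOLD:
            heapq.heappush(review_queue, ReviewItem(-confidence, post_id))
            return "visible_but_flagged"
        return "no_action"

The point being that the model never bans anyone on its own; it only decides how urgently a human should look.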


It feels like you underestimate how much content exists on twitter.


There's under a billion tweets per day. If you need to employ every human to moderate this you're saying it takes about 9 working days to moderate a single tweet.

If a moderator had to approve every tweet before it goes live, a terrible idea and yes, very expensive, I think a moderator could do a dozen per minute, rather than one every 72 hours like your logic implies.
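
Back-of-envelope, with assumed numbers (the tweet volume and review speed are guesses, not official figures):

    # Rough numbers, assumed for the sake of argument.
    tweets_per_day = 500_000_000         # "under a billion" tweets per day
    seconds_per_review = 5               # roughly a dozen reviews per minute per moderator
    work_seconds_per_day = 8 * 60 * 60   # one 8-hour shift

    reviews_per_moderator = work_seconds_per_day / seconds_per_review   # ~5,760 per day
    moderators_needed = tweets_per_day / reviews_per_moderator          # ~87,000 people

    print(f"{moderators_needed:,.0f} full-time moderators to eyeball every tweet")

A payroll of roughly 90k reviewers is enormous, but it is nowhere near "every human on the planet".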


We're taking the "every human on the planet" seriously instead of considering it hyperbole for "we would need to hire orders of magnitude more people than are qualified for that job"?

And humans aren't robots: you can't moderate dozens of tweets a minute and still meaningfully moderate. You can do that for maybe a few minutes before any pretense of moderation has been replaced by mindlessly clicking "okay" until the light turns on and the pellet-dispenser goes "ding!"


My frustration isn't really rooted in legality and has more to do with my personal beliefs about tech and unregulated capitalism.

I've worked in tech a bit - I know how this goes. Some company wants to get a massive valuation by hitting DAUs in the millions, but you can't do maintenance/support with that many users in a single org even if you wanted to. As an engineer, I can totally relate to the feeling of "fuck support - that sounds awful", and I struggle to blame _them_ because, well - like I said, even if they tried they couldn't possibly cover that much work due to an inability to scale their organization.

Psychologically? It's pretty easy to get pissed off at these companies when this happens. These companies are growth-obsessed and the ostensible philosophy is always "making the world a better place". STFU and do your jobs, or don't build a product that can't scale and then pretend it's scaling just fine.


Your reply assumes that these businesses must continue to exist at all costs, effective moderation be damned. If it’s true that they cannot afford good moderation, then maybe it’s not a sustainable business model?

I know online content platforms are wildly different than other industries, which is why these problems exist at all. We don’t seem to have good solutions yet. But we wouldn’t have any issue declaring that any other industry that couldn’t afford to do the “quality control” well enough isn’t a sustainable industry.


It does not: it assumes that corporations have their own interest in mind, and are kept alive as long as they are profitable for (most of) the owners.

Whether we, the people who have to suffer them, want them to exist or not does not particularly enter into this. If that's the part we want to take aim at (and maybe we should? or maybe we shouldn't?) then that's an important but completely separate issue.


What if there is no solution? In my opinion Twitter, YT, FB and others are, at this moment, simply too big for this world and our society. We've created something that should not exist and we have no way of controlling it and no way of incorporating it into our society and our minds without causing a great amount of damage and introducing a lot of unfairness and inequality.


There are other areas of life where the “firehose” is simply against the law because of safety concerns. We might decide at some point that one Facebook was too many and get off this merry-go-round.


They need to crowdsource the moderation job. Something like voting and flagging but cleverer.

Of course this might deprive them of the ability to censor at will.


Why would crowdsourcing be more reliable than initial NN filtering? And of course, let's consider application bias: do you think the reasonable folks would volunteer to moderate twitter, or only the people who think it's a way for them to manipulate the content that shows up on the platform?


They volunteer to vote just fine. Could a moderation system be built on that?

I mean, you vote. And the people you vote for also vote. And those you vote highly can maybe be trusted to vote accurately, and so on. A lot could be done with that voting data.

And maybe you could even get them to vote twice or something.

What we're shooting for is totally user driven (because moderation can't be trusted) and low effort (because users are lazy).
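
Very roughly, something like this (the reputation weights and the update rule are just one assumed way of making "trusted voters count more"):

    def weighted_verdict(votes, reputation):
        """votes: {user_id: +1 keep / -1 remove}; reputation: {user_id: trust in [0, 1]}."""
        score = sum(v * reputation.get(u, 0.1) for u, v in votes.items())
        return "remove" if score < 0 else "keep"

    def update_reputation(reputation, votes, final_verdict):
        # Reward voters who agreed with the eventual outcome, decay those who didn't.
        outcome = -1 if final_verdict == "remove" else 1
        for u, v in votes.items():
            old = reputation.get(u, 0.1)
            reputation[u] = min(1.0, old + 0.05) if v == outcome else max(0.0, old - 0.05)

    reputation = {}
    votes = {"alice": -1, "bob": +1, "carol": -1}
    verdict = weighted_verdict(votes, reputation)
    update_reputation(reputation, votes, verdict)
    print(verdict, reputation)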


When you say moderate without AI are you suggesting human moderation? This is not really a good solution to this problem either. Paying people to traumatize themselves by watching video after video or reading tweet after tweet of the worst content society has to offer is dehumanizing.


AFAIK, most of the moderation on Reddit is done by unpaid humans, not AI. Reddit isn't perfect, of course, but their approach does scale.


As someone who helps moderate a very large subreddit I can assure you we write our own bots to help. A lot of stuff is manually reviewed "in queue" and we review any complaints in modmail manually.

But there are too many comments for us unpaid internet janitors to not use our own robots. Also, since we are not paid, our caring levels are lower, hence the heavy use of regex and automation.
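
For a flavor of it, the bots are usually nothing fancier than this kind of thing (patterns and actions invented for illustration, not our sub's actual rules):

    import re

    # Each rule: compiled pattern -> what the bot does when a comment matches.
    RULES = [
        (re.compile(r"(?i)\bbuy (cheap )?followers\b"), "remove"),
        (re.compile(r"(?i)discord\.gg/\S+"), "filter"),           # hold for the mod queue
        (re.compile(r"(?i)\bkill yourself\b"), "remove_and_report"),
    ]

    def automod(comment_body):
        for pattern, action in RULES:
            if pattern.search(comment_body):
                return action
        return "approve"

    print(automod("Join my server: discord.gg/abc123"))           # -> "filter"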


Moderators on Reddit are extremely petty and vindictive and I have hypothesized for a long time that this is because they are not paid with money.


Yes

1. If they didn't get something out of it, they wouldn't be doing it.

2. They don't get money out of it.

3. So...


Most moderation on Reddit is done by automated bots. Without those it wouldn't scale, so humans alone are not enough.


>reading tweet after tweet of the worst content society has to offer is dehumanizing.

so is being a human being told that you're in the wrong by a robot that obviously has the wrong idea about the situation, but one side is paid to do it and is offered therapy and counseling.

Aside: spare the human by using an AI for the first part; appeals need to be handled by humans. If the issue is the exposure to hatemail, then all I can say is that humans in various industries have been able to cope with reading hatemail without major incident for some time now, as unpleasant as it may be.

Another aside: all employment is dehumanizing to some degree; the contract between worker and company is literally "let me pay you for the finite time you have on this earth so we can increase our financial bottom line." Regardless of the mission statement, it is rarely a motivation any deeper than that.

Let's at least keep the dehumanizing behavior focused on the people who are paid to experience it.


The issue is not exposure to hatemail.

The issue is exposure to child pornography and videos of people literally dying, all day every day. These are not comparable to the pain of being banned, and therapy is nowhere near sufficient.

https://www.theverge.com/2019/2/25/18229714/cognizant-facebo...


With good moderation it won't spiral out of control that much.


What? Even with the best moderation, humans will periodically brigade a site with awful content because they think it's funny to do so.


Per the article, there was a human involved in an appeal for this matter.


Per the article, per Twitter. It sounds unlikely. Though it could be that there's a guy paid to look over stuff the AI flags. If that's the same guy who does the appeals, they might reject the appeal in order to hide that they didn't actually look at it before banning (and hope the complainant just goes away quietly. They probably often do.)


I don't think it's even the money anymore.

I think technologists are convinced they are ALMOST there with automoderation systems...just a few more data points and another thousand hyperparameters and it'll be perfect!

I think they've yet to learn to recognize the good because they're too busy dreaming of the perfect.


Do you think that Twitter could possibly survive as a company without large amounts of automated moderation?


No. And that's why Twitter should not exist as a company. If they were legally required to have human moderation, they would go out of business. And IMO the world would be better off.


I think if the phone companies can't monitor every conversation to make sure no one is planning a crime, they're clearly not actually able to scale in a correct manner and therefore we shouldn't have phones.

(Obviously a different situation, but by what principle is it different?)


> Obviously a different situation, but by what principle is it different?

The principle is that one is a public conversation, and the other is private.

Private conversations between two people should not be moderated. Twitter should not auto-moderate Direct Messages, and they don't AFAIK.


Do we need Twitter that much? Is it worth ruining the society in the name of having twitter or a similar social media?

90s and 00s with all sorts of independent discussion boards were great. Somehow moderation kept up without crappy AI. I can’t see a single benefit of Twitter/FB/etc over fragmented forums as a user.


I’m guessing the “somehow” has something to do with the order of magnitude fewer users on the internet.


More like forums were fragmented: by topic, by geography, etc. And politics was either confined to dedicated forums or just a byproduct of those fragmented topics.

More users, many of them lurkers, would just result in more fragmentation and niches in that old world.

The old forums were also safe from the feeling of censorship. If you didn't like the mods too much, you could just go to another forum or start your own.


With fragmentation, several other things also happened. One, getting banned from one community was not the end of the world. You could join another. Two, because the forums were small, if you got banned from one, you couldn’t seek sympathy by complaining to the entire internet about the unfairness, because nobody knew about that community so getting them to care was a challenge.

And three, you couldn’t complain to the entire internet because there was no soapbox large enough to do it. Only your friends and the new community had to listen to you complain.

On the flip side, an angry user with some technical abilities could poison the well by trying to take your community offline. Whether they succeed or not depends on who has more patience, the custodian or the manchild throwing an extended tantrum. That is at least a one place where centralization helps. Getting resilience set up is a full time job. Though I suppose you could say that’s an area where technology might actually help.


Previously one needed technical abilities. Now one needs to cry enough to get someone or a whole group banned from the unified platform.

For a small forum, being offline for a day or two is not an issue. Even if the admins prove to have less patience than the attacker, the forum hivemind can easily migrate to another forum, new or old. With some losses, but it's still just changing one small soapbox for another. Not a massive endeavour like moving off FB/Twitter to self-hosted.


It couldn't, but what all of these companies could absolutely do is pair automated moderation with human review. When AI flags something as inappropriate, a human has to sign off on it before it's removed or the account is suspended.

The only exception is when content matches a known hash for illegal content - then it should be removed automatically.
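
Rough sketch of that split (SHA-256 stands in here for the perceptual hashes real systems use, e.g. PhotoDNA; the threshold and names are assumptions):

    import hashlib

    KNOWN_ILLEGAL_HASHES = set()    # placeholder: hashes of known illegal material

    def moderate_upload(image_bytes, classifier_confidence):
        digest = hashlib.sha256(image_bytes).hexdigest()
        if digest in KNOWN_ILLEGAL_HASHES:
            return "remove_automatically"      # exact match against known illegal content
        if classifier_confidence > 0.8:
            return "queue_for_human_review"    # the AI only flags; a human signs off on removal
        return "allow"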


How many automated-review actions do you think Twitter performs in a day that would require such a review?

How many seconds might each human-review take?

What false-positive rate is acceptable on that human-review?


Is that an argument that they shouldn't do it then? Because right now, if this article is to be believed, they don't do manual review even on appeals. That's obviously not ok.


As other commenters point out, that characterization is not accurate. There were most definitely humans involved, including on appeal.

Does that make things OK? I suppose that depends on whether you object more to the decision or the process.


I think this is a fair assessment.

Big social media companies are not incentivized to care about psychological and social side effects of their product.

A bit like cigarette companies offloading externalities onto society, but much harder to specifically identify and address.


One complaint is that moderation gets worse when dealing with less common languages/cultures.

Not long ago there was an HN post about someone being banned for being Syrian (sanctions) when they were speaking in Syriac.

For a non-native English speaker, telling Syrian (Arabic) apart from Syriac (an ancient language not natively spoken for over a thousand years) is really hard.

There are going to be similar mistakes in every language. Getting a native speaker for some languages is borderline impossible.

Some languages don’t even have native speakers (Syriac, for example).


This is how big tech does damage to humans.

Love it or hate it, the online world is our current public square. A public square, in the United States, has typically been an area where people can shout out their concerns or whatever they want to anyone who will listen, and the government cannot stop it.

Currently, our public squares are owned by private megacorporations that not only ban people for reasons many people agree with, but also for insane things like this. There is no appeal unless you convince enough others (using another public square) you are right and they also scream for you.

Eventually, all of us that are 'ok' with this will end up being banned from one service or another for life, with no appeal, and that's that. A perfectly pristine public square filled not with people, but consumers staring at advertising boards.

I would love to know a way this can be solved, as it's painting a depressing picture. I don't like how this makes me feel.


I’m ex Facebook software engineer (even had selfie with Zuckerberg as the avatar picture) who got banned on Facebook. Up to this day I have no clue what for. But it’s extremely annoying because I lost lots of photos that I didn’t back up and close to thousand contracts of people. Users should definitely own this data, not a corporation.


The irony here is that I deleted my FB years ago and back then at least, FB only permitted deleting one photo at a time. Took me well over an hour to delete all of my photos.

In hindsight, I should've just done something obviously ban-able instead.


There’s no guarantee that either method results in your photos being actually deleted.


I'm aware of that, I just wanted them removed from the public view.


Has the deleting-photos-one-at-a-time thing been resolved by now? I too remember deleting hundreds of photos one by one.


Iirc it is, at least if you live in Europe


Maybe you got banned because you had a selfie with zuck as the profile picture.

Some automated system trying to prevent celebrity impersonation might have flagged you.


Maybe someone on your old team disliked you.


Surely you had enough contacts at Facebook to get somebody to look into it for you? Just another concern with tying the Oculus Quest to a Meta account. Google accidentally banned one of their own developers and he almost pulled Terraria off of their Stadia platform.


[flagged]


I'm as, "Fuck Facebook," as the next guy, but... yikes.


I agree with you, yikes. But the concepts of rehabilitation and reintegration are not known to everyone.


Nuance and understanding, too, it seems.

They said "ex engineer", but nobody bothered asking what they worked on or why they left. Instead, you've got a bunch of people suggesting that they should be kept out of the rest of society because their resume says "Facebook". Boggles the mind.


This was uncalled for man.


Why is it ok to collectively hate bankers, people working in big tobacco, salesmen in big pharma and many other industries that we know for being damaging to society, but when it comes to Big Tech we suddenly think "it is uncalled for"?

Facebook is a deplorable company and its leadership is known to have a completely broken moral compass. But why should we hold only the leadership accountable? It is the developers, the product managers, the biz dev people who are actually getting their hands dirty, and you can bet that no one is doing it against their will.


I think it's a stretch to say that most people hate every single person working in those industries.

We don't hate the tellers, we hate the people running the banks and fucking around with our money. We don't shit on Amazon warehouse employees, we shit on the people making their lives hell. In a similar sense, we don't know what this person's role was as a software engineer. It's one thing to discuss the morality of their employer with them, it's another thing to go straight for their jugular and write them out of society entirely just because their resume says, "Facebook".

And, as I said in this same comment chain, I am incredibly "Fuck Facebook" as a whole.

It would be great if every single person had the luxury of making employment decisions primarily based on moral code. The reality of life demonstrates that that is not always viable for everyone.

Edit: Do we assume that all Starbucks employees, even the ones trying to unionize, are anti-union because that's the corporate stance?


A bank teller is not involved in the implementation of the process and systems that define how banks operate. A software engineer is instrumental to how a company operates.

> It would be great if every single person had the luxury of making employment decisions primarily based on moral code.

We don't need to have "every" person in condition to make such choices, but there are plenty of people that can, and these people are failing us. If you are smart, educated, and have enough work ethic to get a job at Facebook as a software developer, you certainly have what it takes to work anywhere else.


>A software engineer is instrumental to how a company operates.

>If you are smart, educated, and have enough work ethic to get a job at Facebook as a software developer, you certainly have what it takes to work anywhere else.

There is a lot to Facebook, and a lot of various internal and external software that requires SWEs. You lament that people in OP's position "are failing us" because they're not doing more to speak out about FB's bullshit, and that they are better-positioned to get a job elsewhere, yet I have only seen you spout negative rhetoric towards OP and make zero effort to understand more.

You haven't bothered asking them what they worked on at FB, nor have you asked them why they are an ex FB engineer. You have no idea if they contributed to fucked up shit or if it's the fucked up shit that made them leave (isn't that departure what you want, after all?).

Your general sentiment isn't wholly incorrect, it's your determination to kick nuance to the curb and paint everyone with a broad, negative brush that is the issue.


> they're not doing more to speak out about FB's bullshit

"Actions speak louder than words".

I care very little about what they are "speaking out" about. My point is that Facebook's reputation is not new. Its privacy abuses, its practices for exploiting user data for its own benefit, its "move fast and break things" mentality that applies even to things that affect the social fabric of the whole world... everything has been out there in the open since at least 2010, when the first "delete Facebook" campaign came up.

So, it's not like people went to work there without knowing what they signed up for. They were not being offered generous compensation packages to work on ways to make the company less abusive. Their UX designers were not being hired to find ways to make users less dependent on the app. No developer was rewarded for finding a clever way to determine if a user was a teenager and therefore unable to have access to the website. No data analyst got any bonus because they implemented a system that reduced data collection and increased user privacy. Quite the opposite, actually.

> You have no idea if they contributed to fucked up shit

Anyone working on Facebook at a position that contributes to how the company operates and makes business is by definition working on "fucked up shit". Even the "cool open source projects". Even John Carmack while working at Oculus. Even the people working at WhatsApp who were from the beginning hoping to keep it clear from Facebook influence.

Anyone receiving a paycheck from Facebook should be aware that their money comes at the expense of a lot of societal harm. They might even be okay with it (it is a good paycheck after all), but anyone with a modicum of ethical conscience would simply refuse to work there.


I don't disagree with your general statement, but you have no idea when OP worked there, you have no idea why they joined, and you have no idea why they left. Were they there pre-2010? Did they join after thinking maybe it wasn't so bad and then quit when they found out it was?

We don't know, because you won't ask. Again, I agree with your general views on FB. But when there's someone right in front of me, I'm gonna talk to them and find out more before I tell them that they shouldn't be allowed to be a member of society. Life is nuanced, and the degree to which we harbor hatred towards those doing harm is nuanced, too. I will always be initially apprehensive of someone like OP, but at a macro/individual level I will always work to understand more - it's the fair thing to do.

If you want to ignorantly paint everyone with such a broad brush, go ahead.


> I don't disagree with your general statement, but you have no idea when OP worked there, you have no idea why they joined, and you have no idea why they left.

I already responded on another comment: it's not about the individual and it's not about wishing ill of anyone. Hate the sin, but love the sinner, and all that...

In any case, I knew that OP's name was familiar, so I went to look at his history. If you do the same, you will find a comment where he talks about "being excluded from the banking system". [0] The part he doesn't say (here, but it is on his LinkedIn) is that he was "CTO" of xSigma, which is one of the many "revolutionary DeFi" projects that took a lot of money from gullible fools and now have nothing of value to show for it. Not only that, he also uses that same comment as an opportunity to shill for his next project.

IOW, even though I didn't enter the conversation to "pile on" against the individual, in this particular case the evidence is really stacked against him.

[0]: https://news.ycombinator.com/item?id=31861753


I agree. And, "just following orders" (i.e. "just doing his job") isn't a legal defense to following unlawful orders. Nor should it be a moral defense to doing unethical things just because there's a paycheck involved.


Can you tell us what OP's "orders" were, though? Because at this point you're just speculating and assuming the worst. Your comment about people working for unethical companies for a paycheck also comes off as classist and out of touch.

Edit: Hell, they said "ex" engineer. For all we know, they could have very well been there early on, taken issue with what they saw and then left because of that. Yet you and at least two others are assuming the worst and suggesting that they never be allowed to integrate with regular society.

Jesus.


Anyone who's been awake for the past 5 years knows how shitty and unethical a company Facebook is. If they'd been there since the beginning, they could have figured out how, let's say, "flexible" Zuckerberg's ethics were. And there are plenty of companies that are ethically benign, if not doing real good in the world. To suggest that people should act ethically, even when money is involved, is not classist. And, if you think it is, consider what "class" a(n ex-) Facebook engineer is likely to fall into. (Hint: the one with the financial means to care about ethics.)

And I will thank you very much to stop putting words in my mouth. I have not suggested that anyone not be able to reintegrate into society. I am, however, suggesting that they pay their dues back to society before doing so.

Tell me: what acts do you feel are justified by "just following orders," and under what circumstances?


Yet still not a single attempt to understand OP's perspective, actions there, time there, or reason for departure. Just an insistence on an immediate straight shot to, "Nope, they have dues that must be repaid no matter what".

Got it.

Edit: And again, I must emphasize nuance. People talented enough to work at a FAANG absolutely can have significant financial issues, outstanding debts, etc. that may make them prioritize finances above ethics. I am surprised that this continuously seems so lost on HN users.


> People talented enough to work at a FAANG absolutely can have significant financial issues, outstanding debts, etc. that may make them prioritize finances above ethics.

You can not use the exceptions to morally justify the general case.

Let's say that 2% of the people took a white-collar job there because of some absurd hardship. What is the excuse of the other 98%?

Make it higher. Let's say that we are living in a bizarro world where 10% of the people who are well educated, talented, energetic and able to get through such a grueling interview process are facing some type of hardship. What is the excuse of the other 90%?

Also, how long do they need to continue working there until they can resolve their situation? If they took a job just because of the pressure of their hardships, presumably they could settle the score after 2-4 years and then move on to something else? Are these people doing that?


>What is the excuse of the other 98%?

Forgive my bluntness, but I don't know how many times I've said this in this comment chain to you and others - fucking ask them. An ex-FB SWE posted here, someone ate him alive for simply having worked there, you piled on, and now you're rhetorically asking me what their excuse is for taking that job.

I don't know, and I won't pretend to. I've been saying y'all should ask, and here you are wondering, so... ASK OP. Talk to them. Have a conversation. They're right up there!


You seem to think that the point is about the particular individual and their story. It's not. I'm not "piling on" anything.

My point is that we collectively criticize the leadership of businesses like Facebook (or any "FAANG" for that matter), but we give a pass to all the people who go work there. My point is that we should (collectively) start looking at them as enablers and hold them responsible too for all the damage that the companies have done.

Instead of saying "how cool, Johnny is making 400k/year to work at Google, I wish I could do the same", we should be saying "What's up with Johnny that he needed to sell his soul for $400k/year?"


Not going to answer my direct questions to you, either, huh?

Got it.


>Not going to answer my direct questions to you, either, huh?

... it was a single question, and you edited it in as I was responding.


You've seen it now, and I asked you to "consider" something as well.


You're coming off a bit more aggressive than this conversation necessitates.

My response to your "consideration" is in my edit. Asking one to "consider" something is just that: asking them to think about it. It does not ask for a response (yet I did give you one while you were chastising me for posting while you edited yours).

>what acts do you feel are justified by "just following orders," and under what circumstances?

There is a theme here - nuance. I've worked (past tense, and I'll have you know that part of the reason for my departure was my disagreement with this partnership) with FB as a partner in the past, and I am well aware of the incredibly wide variety of roles there. This is exactly why I am insisting that we talk to OP and learn more before we decide that they're a stain on society, need rehabilitation, whathaveyou.


What's wrong with Facebook? I really like their features.


Mostly the spying on people against their wishes part, then exploiting this data for commercial purposes. Even if you aren't a member, Facebook keeps a shadow profile on you, so it's next-level surveillance.


Really? Stop being hyperbolic. The guy did his job.


Here's a fun story. I see that this post is from fanbyte.com.

Existing got me banned from fanbyte.com and all of the websites they own.

Specifically, existing in a particular country. Not an exotic or unusual one on an embargo list or something -- just some country that Fanbyte felt they weren't earning enough profit from.

They banned the entire populations of about ten countries from their entire network of websites earlier this year, for that reason.

If this guy who got banned from Twitter works for Fanbyte, I have no sympathy. I'll never know the details of the story though, because Fanbyte is out there banning entire nations.


But could that be Fanbyte's way of dealing with a specific country's laws? Perhaps they don't have the manpower and those countries are small enough that the only cost effective option is to just not operate in those countries.


Somehow it's only Fanbyte that has this problem with people in Turkey / Serbia etc. Why has no other publisher gone to such measures?


Incredibly, they had a remote employee in Serbia, so when they banned all these countries, they banned one of their employees. Last I heard they were telling him to use a VPN to bypass their own corporate rules (which they have admitted exist to dodge Serbian laws...!)


Which is fine.

Because if the company has no physical presence in a certain country, they can basically ignore their laws, because what’s the country going to do? Block your website?

Having an employee there establishes physical presence.


Right, but their stated reason for instituting the policy in the first place was that they couldn't ignore Serbian laws if their websites were available to be viewed in Serbia.


> they couldn't ignore Serbian laws if their websites were available to be viewed in Serbia.

With no presence in Serbia, they absolutely can. Serbia has got virtually no recourse in such a situation.

What are they going to do? Sue them in Serbia? They'll never show up. Then trying to enforce the judgment overseas will be a monumental (and also completely pointless) exercise.

Try suing someone in, say, the United States for violating Serbian law. Your case will get dismissed because the said US court would have no jurisdiction over the matter even if the company was based in the US.


Absolutely, I agree with you. The only party who disagrees is Fanbyte in their official statements about why they pulled out.


I'd consider the Serbian employee at risk if I wasn't following any applicable Serbian laws. Their explanation seems to be that even the legal review of these countries would cost far more than any revenue they are making, and their lawyers are advising these countries need that review.


But different companies can make entirely different business decisions in such situations. Another company may decide devoting extra resources is worth it. It is a subjective value assessment that can go either way.


Don't have sympathy for an individual - instead, consider the path this is taking humanity down and consider what happened to you.

A big tech company made the decision that the voices from your country were not worth hearing, so it banned you from global discussions. Individuals can also be banned, with no recourse, due to an AI model gone awry.

How do we deal with this and make things better? Legit question - I don't know myself and it eats at me sometimes.


China, Indonesia, Philippines, Thailand, Turkey, Serbia and India

Just wow.


[flagged]


It's none of those countries homie, thanks for letting us know you hate random countries or whatever


That's... about 75% of the world population?


Uh, no it's not. In total they are about 3.4 billion, a bit over 40% of the world population. 75% of the economic activity, maybe.
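
Rough 2022 figures, if anyone wants the arithmetic:

    # Approximate 2022 populations, in billions (rounded).
    populations = {"China": 1.41, "India": 1.41, "Indonesia": 0.27, "Philippines": 0.11,
                   "Thailand": 0.07, "Turkey": 0.09, "Serbia": 0.01}
    world = 7.95
    total = sum(populations.values())
    print(f"{total:.2f}B of {world}B = {total / world:.0%}")      # ~42%, nowhere near 75%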


Well, Japan has APPI and as far as I know APPI is semi-equivalent to GDPR (which would be the EU's "horrible" Internet law, which doesn't apply only to the Internet btw). So you might hate Japan as well, right?


At least Japanese courts will refuse to apply these laws to foreign entities, as they have no jurisdiction.


And the EU tries to apply it to local subsidiaries if those are "main establishments" of data processing. There is a provision on finding the main establishment to avoid loopholes where local subsidiaries in the Union could hide the main company behind a legal entity in the Union (as was the case with Google LLC when they got fined by CNIL). Japanese courts not having this provision is a major drawback, which I don't believe they would leave open.

Relevant GDPR regulation:

> Article 4(16) of the GDPR states that ‘main establishment’ means:

> - as regards a controller with establishments in more than one Member State, the place of its central administration in the Union, unless the decisions on the purposes and means of the processing of personal data are taken in another establishment of the controller in the Union and the latter establishment has the power to have such decisions implemented, in which case the establishment having taken such decisions is to be considered to be the main establishment;

> - as regards a processor with establishments in more than one Member State, the place of its central administration in the Union, or, if the processor has no central administration in the Union, the establishment of the processor in the Union where the main processing activities in the context of the activities of an establishment of the processor take place to the extent that the processor is subject to specific obligations under this Regulation;

Keep hating the EU, US and so on but your local place has very similar provisions around data privacy and it's considered almost completely aligned with GDPR.


Eh, I don't buy the public square argument.

The online world is a cesspool filled with bots and crazy people. I don't need to voice my opinion on Twitter, nor is Twitter a place I need to be.

What matters is how we relate to the people around us, in our communities. Your neighbor isn't a bot, the barista at the coffee shop isn't a russian troll. We need to stop taking the "online world" so seriously and connect with the people around us more.


If I am interpreting this correctly, and I may not be, it seems like you are saying "The online world does not contain public squares, and that is because we need to change what everyone else in the world considers a public square."

If I am interpreting that correctly, I don't know what I can add here. I don't necessarily disagree, but I think we have moved past that possibility. How many of us know our neighbors and talk to them? I've lived in my home for a few years and I don't even know their names fully. But, I now know yours and the fact we (at least slightly) disagree.

Right or wrong, I think humanity has moved on and we are now connected.


I didn't know many neighbors until I started walking a cute, friendly dog. You should try that.


Nearly all my conversations with neighbors involve dogs or flowers.


The real world is also a cesspool filled with crazy people and people trying to abuse the system (something analogous to bots). Also spammers and harassers.

The difference is that in real-world public squares there's no amplification effect like the one we have on those platforms, save for some very closely watched exceptions (politicians, celebrities). Also very little anonymity.


> The real world is also a cesspool filled with crazy people and people trying to abuse the system (something analogous to bots)

No, the real world is an excellent place, filled with great people everywhere.

There is no comparison with the Twitter/Facebook mob, where 99.99% of people strive for your attention, for monetary gain.


> No, the real world is an excellent place, filled with great people everywhere.

The internet is also filled with those same people. That includes Twitter and Facebook. The reason the bad ones are extra loud on the internet is that some websites amplify their messages, whereas normal people don't want the extra attention and prefer privacy.


Yeah, I don't think the problem is public squares, but the force multipliers that certain actors, especially well-funded ones (meaning mostly state actors, but also corporations and other interest groups), can use to dominate digital public spaces.


The bot solution is extremely simple: add more layers of human verification or a paid tier. It hurts the user experience in one way but enhances it in another. Why not at least require verification once per 24 hours? Trolls in person are not any better; we have enormous amounts of crime and lies in the real world. Being online actually protects users physically, so much so that the arguments are now mostly about what harms users mentally online. It seems dramatically better to risk only your mental state than to risk both physical and mental harm in the real world. Being online has gotten a bad rap because people can get their feelings hurt, but at the same time there are countless options to protect your feelings online without banning other people. People have the tools to build a bubble around themselves, but for some reason they often choose to build a prison around others instead.


I get that. The problem is that Twitter/TikTok/Facebook does and will spill into the real world and truly affect you.


Not to mention the Orwellian proposition: "delete the pic we deemed wrong so we can unban you". Apparently it's too hard for them to just delete the offending pic themselves; it's better to teach you a lesson and make you do it.

There are four lights


The torturer wanted him to say that there were five lights, as that would be a signal that Picard was now accepting the torturer's reality. It's based on a concept from Orwell's 1984, where another torturer says that they can make someone believe 2 + 2 = 5.
Also, Asch's “Line Experiment”, where people conform against their own perception.


Personally I prefer getting a message "delete this please" instead of some of my messages/pictures/etc. silently disappearing without me noticing. Ideally, they would make the pic not available to other users, but still available to me, and notify me that my account will get banned in $N days. Ideally, during that phase there would be the option to pay some minor amount of money and ask to get a human moderator involved to review your case.
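
Sketch of that flow (the grace period, field names and notification wording are just my assumptions about how it could work):

    from datetime import datetime, timedelta

    def soft_moderate(post, grace_days=7):
        """Hide from others, keep visible to the author, start a countdown instead of banning."""
        post["visible_to_others"] = False
        post["visible_to_author"] = True
        deadline = datetime.utcnow() + timedelta(days=grace_days)
        post["ban_deadline"] = deadline
        notify_author(post["author"], f"This post was flagged. Delete it or request human review "
                                      f"before {deadline:%Y-%m-%d} to avoid suspension.")

    def notify_author(author, message):
        print(f"to {author}: {message}")

    soft_moderate({"author": "someone", "body": "clip from a cat video game"})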


> Ideally, they would make the pic not available to other users, but still available to me, and notify me that my account will get banned in $N days. During that phase you might even pay some minor amount of money and ask to get a human moderator involved to review your case.

This would just become another revenue stream given enough time and changes of leadership around such a system. Treading a thin line on just how abusive they can be on the review process to extract money, it would be chock full of perverse incentives.


Yup. And as soon as it becomes a revenue stream, there is no incentive to prevent false flagging.


> Apparently it's hard to just delete the offending pic

Now that would be censorship! /s


I agree - it's modern-day struggle sessions.

There are four lights.


It's even worse, "by deleting the pic, you agree with our decision that it was wrong.".

"Yes master, this video of a cat video game is revenge porn and I'm sorry for posting it!"...


It's not censorship when you delete your own content.


/s?


> Currently, our public squares are owned by private megacorporations

Here's the problem I have with this analogy. Is it analogous to a public square (as in public property, owned by the state, maintained by taxpayer money)? Or is it private property that allows people to congregate for free but puts up ads to raise money, and that many consider as good as a public square because most people they like hang out there? And like any business, they choose to deny service to some users (and not all users agree on this). Also, there are multiple such private spaces.

Maybe the real solution is just like in the physical world, there should be state owned online spaces that are real "public spaces" and denying service here would be the govt denying rights.


See, I think you got into the area that I am troubled by.

I will state two things I consider to be true that are mutually exclusive:

* Online spaces (Facebook, Twitter, etc.), when they reach a certain level, are de-facto public spaces, and excluding people from them by big tech is harmful to humanity.

* Online spaces owned by a private corporation are for their benefit, not humanity's, and forcing them to keep others on them is an infringement of property rights.

... I am a human and I can simultaneously hold views that exclude each other. I wish I knew a way to make these consistent.


> Online spaces (facebook, twitter, etc), when they reach a certain level, are de-facto public spaces and excluding people from it by big tech is harmful to humanity.

Sure, we can treat online spaces like utilities (e.g. I don't know exactly what the law is, but something like denying water service is not legal, or you've got to have a good reason to do so). And like content, water quality should be maintained to a standard (e.g. moderating scammers, bots, illegal/objectionable content, etc.). But the big difference is that the govt can set water quality standards but is very reluctant to set content moderation standards (for good reasons). So we end up with a situation where we want content moderation, but it's not a govt-approved standard, and so we end up with moderation that not everyone agrees with.


Part of the issue here is that we regard (not just legally, but even in how we talk) corporations as people. Or perhaps that we try to use the same rules for small-scale "ownership" as for vast almost monopolistic levels of control.

I think we should not regard property as a concept so highly. We don't need to paint this as a black/white nihilistic capitalism vs. naive communism issue either. But it sure would help if we'd be a little less fundamentalist when it came to corporations. And also if we didn't try to stretch the metaphor of "this is my pair of shoes!" all the way to "I get to own a significant chunk of everybody's communication!"

Corporations as entities should shield their owners and employees less absolutely, and have more restrictions on rights that are entirely divorced from those of their members than humans have. And we shouldn't assume ownership in-the-large needs to work _exactly_ the same as in the small.


The law deals with this all the time, but it becomes complex when you cross jurisdiction boundaries. If a corporation wants to operate a public space, like a common area within a company town, you generally have to follow the law on what speech is allowed in that space. If your square is in the US, you have to allow Nazis to protest, but if it's in Germany, you probably have a duty to call the police on the Nazis to stop them. What do you do?

Corporations often operate public squares to their benefit, so I have no problem with them having to follow local laws on what needs to happen in that public square. Google already personalizes your search results, and Facebook personalizes your feed. They can add local laws into that personalization algorithm.

Edit: Updated the example to be correct.
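
i.e. something like this at feed-build time (hand-wavy sketch; the country codes, flags and rule functions are assumptions, not how Google or Facebook actually implement it):

    # Apply per-jurisdiction rules when assembling someone's feed.
    JURISDICTION_RULES = {
        "DE": lambda post: not post.get("nazi_glorification", False),   # assumes an upstream classifier sets this flag
        "US": lambda post: True,                                        # broader speech protections
    }

    def personalize_feed(posts, viewer_country):
        allowed = JURISDICTION_RULES.get(viewer_country, lambda p: True)
        return [p for p in posts if allowed(p)]

    print(personalize_feed([{"id": 1}, {"id": 2, "nazi_glorification": True}], "DE"))   # -> [{'id': 1}]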


> If a corporation wants to operate a public space, like a square with a fountain outside an office building, you generally have to follow the law on what speech is allowed in that space. If your square is in the US, you have to allow Nazis to protest, but if it's in Germany, you probably have a duty to call the police on the Nazis to stop them.

This is not correct. [0]

> Private property owners can set rules for speech on their property. The government may not restrict your speech if it is taking place on your own property or with the consent of the property owner.

[0] https://www.aclu.org/know-your-rights/protesters-rights


> private property that allows people to congregate for free

People keep implying (and maybe you aren't and I am associating you with them unfairly) that we should be willing to deal with these issues because giving up our privacy/rights is a fair exchange for access to a platform. If people didn't give Facebook access to store and share content that they create, it wouldn't exist.


Worth pointing out that more life exists outside the corporate run stages. Part of their trick is to make you think that the only way of life is via their platforms.

I’m not and never have been a Twitter or or TikTok or Facebook member.


I also avoid those platforms but that also means self-exclusion from entire communities. I've seen large groups (DnD groups, gaming/sports clubs, obscure tech forums etc) being swallowed wholesale by Facebook, then Reddit and now Discord. That may not matter for a random game group but unfortunately Twitter is where most of the worlds journalists post and network.


Where I live now, most sports and hobby clubs coordinate solely via Facebook.

If you don't have a FB account (like myself, currently), you are shit out of luck when it comes to finding out what's going on where/when.


Meetup.com is still working out well for me. Also everyone knows everyone so you end up with a vast contacts network after a bit.


I don’t think it’s a trick at all, it’s actually much worse than being kicked out of a public square. Making YouTube videos or Tweets or Facebook posts can be your job and facilitate your entire form of income. You spend 10 years cultivating a following and if you say or do the wrong thing, you don’t just lose the public square, you basically get fired. Yea you can try to diversify but the average person doesn’t do this, they build something somewhere specific. These platforms are tremendously important, both for speech and for peoples livelihoods. Banning people and the appeals should be much more of a delicate process than currently. Right now these moderators are akin to police departments with no court system. One day we will enact laws that force companies of certain sizes to have an appeal process, or in some way dedicate resources to accurate appeals, or maybe just force companies to keep records of the ban and give users the right to sue for mistakes, similar to workers have for with employers. We almost need an entirely new public court system to manage that process so the burden isn’t on companies or users. I don’t have the answers but at some point consumer rights will be needed to stop abuse or lethargy in moderation of important decisions.


Sure, but getting hundreds or thousands of people to join your Mastodon instance or whatever is a huge barrier when Twitter works fine /most/ of the time.


Or, you know, in real life. Which was more my point. I've done more networking face to face over the last couple of years than I ever got done using the Internet.


> A public square, in the united states, has typically been an area where people can shout out their concerns or whatever they want to anyone who can listen and the government cannot stop it.

Growing up in Birmingham, AL I had a couple teachers who had dog bites and remembered the hoses from when they tried to shout out whatever they wanted in public squares. It's always interesting how people have such a rosy view of history when it comes to forming a much beloved but not well thought out belief.


They remember the dog bites because those are cases where the government acted illegally and was taken to task.

Today, the governing actor of our virtual public squares are corporations - and a significant amount of people (maybe not you) seem to be ok with them taking what actions they wish.


> They remember the dog bites because its cases where the government acted illegally and were taken to task.

Uh, no. Society has whitewashed the history of the Civil Rights Movement, but popular sentiment at the time was not with Martin Luther King, Rosa Parks, the other freedom riders, etc.

Civilians and the government sometimes acted illegally, sure, but some of those actions were absolutely legal. (Not to mention that the government often leaned on citizens to take illegal vigilante action so that "the government" wouldn't have to - the distinction was not always clear cut).

And either way, it's hard to make the argument that they were "taken to task." Even after Brown v. Board, it took over a decade for many local governments (even in the North!) to be forced into compliance. And the actual individuals responsible for the illegal actions rarely faced any serious consequences.

EDIT: Clarified the point of legality.


The government acted legally. That was (and in a lot of cases still is) the problem. In fact most cities and local governments reserve the right to use hoses and dogs they just don't because it would be bad optics. As the saying goes "the more things change, the more they stay the same".


> The government acted legally. That was (and in a lot of cases still is) the problem.

Sure - I understood "dog bites and hoses" here to be a synecdoche for the general government and civilian response to the civil rights protests, some of which was legal and some of which was not. Obviously I don't know the specifics of your teachers' particular stories.

It's easy to write off those atrocities with "that happened, but it was dealt with, and it's in the past now" - which, as you pointed out in the other comment, is a convenient lie. My main point was that these were not isolated incidents (as GP implied), and that - whether legal or not - those involved faced few consequences (if any) for their actions.


Understood, and I agree. I always feel like it's important to stress that when those protests happened, the government's response (by way of the police force) was considered well within its rights - in support of your main point.

It's a quibble but an important quibble imo.


It wasn't "illegal" for Bull Connor to use dogs and hoses. The problem was that it was legal and for most white people in the south, encouraged. Again this is another example of having a convenient lie ready made to fit a strongly held belief.


Look man, is there some reason you presuppose bad intent and describe a difference of opinion as a lie?

There was a time when wanting free speech was a good thing. It wasn’t that long ago.


Something being legal or not isn't a difference of opinion, lmao.


You are perpetuating a myth. Malcolm Gladwell researched and presented this in his podcast episode "The Foot Soldier of Birmingham". https://www.pushkin.fm/podcasts/revisionist-history/the-foot...


None of my teachers were Walter Gadsden, and none claimed to be the boy depicted in the statue.

The fact that you took the story of a photo and extrapolated it out to mean that anyone who said they were attacked by dogs is a fraud and are perpetuating a myth says a lot about you.

Also, next time link a transcript of the episode rather than just the audio when you want to use it to prove you don't know what you're talking about.


It's better to face opposition like that than to be disappeared from the public square.


There are plenty of public places in the US, but the so-called "public square" where strangers gather to debate is a total myth. It does not exist anywhere, and I'm not sure it ever has existed. Certainly not in my lifetime, anywhere I've ever lived.

Friends do gather in public squares to talk or lounge or listen to concerts or whatever. But the kind of thing that happens on Twitter, or on HN for that matter, does not happen IRL. Occasionally there's a crazy preacher who attempts to address a crowd of strangers, but mostly the crazy preachers get ignored or ridiculed. There's no real, serious discussion and debate in the public squares among people who didn't already know each other.

You can go to an open city council meeting to discuss and debate public issues, but those meetings are strictly governed by rules and moderated.


It existed in ancient Greece at least


I have no first-hand knowledge of this, but if you have to go back 2500 years for an example... ;-)


Ancient towns were small enough, everyone likely knew each other.


I always wonder why they don't just rely on the same user-reporting and automation systems to downrank, gray out, hide from indexing, or mark content as controversial, instead of banning entire user accounts.

Then repeat offenders can be banned with good reason. It seems absurdly heavy handed to remove legitimate accounts, irreversibly and without recourse, for a single post or comment, this should not be legal.
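
In other words, escalate visibility penalties first and keep bans for repeat, human-confirmed offenders - something like this (thresholds invented):

    def enforcement_action(flag_confidence, prior_confirmed_offenses):
        """Graduated response: downrank, then hide for review; ban only repeat offenders."""
        if prior_confirmed_offenses >= 3:
            return "ban"
        if flag_confidence > 0.9:
            return "hide_and_queue_for_review"
        if flag_confidence > 0.6:
            return "downrank_and_mark_controversial"
        return "no_action"

    print(enforcement_action(0.95, prior_confirmed_offenses=0))   # hidden and reviewed, not banned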


I think your suggestion is better than the alternative we have now - but I think its not the right one either.

What if the majority of people strongly disagree with a minority group's statements or cries for help? What if users join a mass flagging / downvoting / reporting campaign (either purposefully, or stand-alone-complex style) to exclude those users?

If the statement is "Maybe the majority is telling you they don't like you, asshole," then consider the 1950s/1960s. I would state that a significant portion of the US was torn on whether one group should be treated the same as other groups. During that time, I am almost positive that if we had this policy, the minority group would have been 'grayed out', 'hidden from indexing', and 'marked as controversial.'

Do I have a better suggestion? No. No I don't. I think yours is better than what we have now, but the tyranny of the majority is a thing.


> I would love to know a way this can be solved

Step 1. Hire humans and pay them enough to think (and communicate with customers).

Step 2. Keep an audit trail of events and communications related to these ban situations. Those can be useful when escalating a dispute (to another human from Step 1, but someone with more authority to make judgement calls).

Detection algorithms can still be used, but they should only be to flag potential bans. Any actual action taken must be done by a human from #1.

Done.
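
For Step 2, even a dumb append-only log per enforcement action would do (sketch only; the field names are invented):

    import json, time

    def log_enforcement(logfile, account, action, reason, actor):
        """Append one auditable record per moderation decision; `actor` is always a human reviewer."""
        record = {
            "timestamp": time.time(),
            "account": account,
            "action": action,       # e.g. "warn", "suspend"
            "reason": reason,       # e.g. "classifier flag: doxxing, score 0.93"
            "decided_by": actor,    # a person from Step 1, never "auto"
        }
        with open(logfile, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_enforcement("moderation_audit.jsonl", "user123", "suspend",
                    "classifier flag: doxxing, score 0.93", "reviewer_42")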


>Step 2. Keep an audit trail of events and communications related to these ban situations.

Hard to imagine these companies, whose whole value proposition is in the data they collect, are neglecting to collect this data.


I don't think they're collecting this type of data... because it almost doesn't exist. It's not like you have a back-and-forth communication with customer service (because there is no customer service group).

You just have the one or two dispute statements you may have been allowed to submit, and then you have the predictable automatic denial replies.

They probably do save the dispute claims. But that info is useless, both because it has no bearing on the outcome and because there's no oversight anyway.


>> I would love to know a way this can be solved as its painting a depressing picture.

It's possible that the answer is for society to simply go back in the other direction. Despite the initial promise of the Internet, it's possible it will never function as the new public square. Rather, we might have to go back to more traditional methods of expression, in physical spaces, or writing letters to elected officials.

Or perhaps a body of ethics could be developed similar to the body of ethics that guides journalists. But that body of ethics, for journalists, grew up at a time when the USA had many thousands of newspapers, and none were dominant, so the power of the profession was balanced against the relatively small power of any one newspaper. This was back before the New York Times and the Washington Post emerged as national papers. It would be more difficult to develop a similar body of ethics for social media, since the whole space has already consolidated down to 4 major corporations, and there is no obvious profession that can assert a body of ethics independently of those 4 companies.


The public square argument is a distraction and it goes nowhere. What needs to be called out is Twitter just being terrible at enforcing their own TOS, and the human review process apparently having no oversight or consequences for responses like the author received.

This is a straightforward problem of bad and inconsistent service quality.


And the only way to fix it is regulation — with teeth — of privately-owned public communications platforms. Does anyone see another way that could work?


> I would love to know a way this can be solved as its painting a depressing picture.

When a giant corporation says “here’s our private property, use it as your public square”, say “fuck off”.


There are many "public squares" in the world owned by corporations -- 592 in New York City alone including 168 plazas. They don't have the same protections for their users that a true public square has, but they're cheaper for the average voter, and that's what we care about.


Twitter is not the government? Twitter also is not really a public square, it's an advertiser-supported electronic press.

If I own a printing press, I have the freedom to print what I want, including what people send me, and I can also decline to print what I want, including what people send me. It's my press. Want different rules? Get your own or use someone else's.

Because Twitter is a privately owned super-fast electronic press, administered at their whims, people should stop using it for things that matter unless they have an actual contract with the corporation. I don't rely on Twitter for anything other than entertainment.

The fact that people expect rights and meaning with advertiser-supported stuff given to them for free is what's harmful.


This metaphor also fails, in the existing regulatory framework. If you own a printing press and print something, you are liable for its contents. Twitter, under Section 230 of the Communications Decency Act, is not liable for content that its users post.


Now take this, and multiply it for Zuckerberg's idea of a metaverse.


The thing with physical public squares is that they are self-regulated - if someone commits a crime, everyone runs away and someone calls the cops, and the offender effectively has their life ruined by a criminal record. If someone goes around shouting hateful abuse at people on the street, they tend to learn their lesson when enough people take notice and confront them with a warning to stop.

Unless you bring in real identities onto twitter, none of those societal protections will apply to social media. Nobody can do anything to spam/bot accounts if it's 1000 accounts being created every minute.


> the online world is our current public square

That's right, and that's why virtually no one ever gets banned from using the internet. Twitter is not "the online world".


That's like saying a car is just one method of transportation, so if you get banned from using cars you have no right to complain. But I can tell from your spastic, angry (not to mention logically incoherent) post that you are a Twitter user who's afraid of their echo chamber changing.


> This is how big tech does damage to humans.

This being whatever AI flagged his post. I would not be surprised. I'm seeing a trend where Facebook, Discord, and I guess now Twitter are trying to use AI to moderate. It is going to backfire hard. Lots of blatantly bad things stay on their platforms, but innocent nonsense gets you banned. Either their content moderators are really bad at their job, or silently protesting, or there's an AI involved.

In my case, someone posted a video on Facebook of a giant spider in a little girl's hand; it was bigger than her face. My comment was a meme comment you see all over the internet, "girl put that spider down we gotta set the house on fire" - there's a recurring joke about burning the house down to kill a spider due to arachnophobia. An innocent joke, not actually serious, but I got a warning; I guess they thought I was actually advocating burning down a home, which I was not. I appealed and was denied not long after. There's an AI involved, and whoever handles appeals is either lazy, the AI itself, or has no concept of internet memes.


It can be solved by realizing that private websites ARE NOT public fora.

They have never been, are not now, and will never be.

HN has removed comments of mine before, and will again, particularly if they stray too far from the party line regarding cryptobros, SEObros, and devs who whore themselves out to ad companies when they could be doing actual real work that benefits humanity while making just a little teeny tiny bit less money; but whores are whores, so they'll keep on keeping on.

I'm fine with that. And you should be too.

If HN was a public square I could tell them to fuck off. Nothing I said would meet the community-standards or obscenity threshold in any courtroom in any jurisdiction of any court in all of the United States.

"Oh but HN isn't as big as Twit.." is bullshit and has been tried before. Anyone who makes this argument is so intellectually bankrupt they should be banned from the internet for being too stupid to operate an internet browser.

You have the same "public square" rights in Times Square that you have in Lotsee Oklahoma (0.02 mi sq), which is smaller than Times Square (0.25 mi sq).

Private parties can set arbitrary, capricious, subjective, unfair, and mean rules about what they allow on their platforms, and don't even have to set any rules besides "because we said so".

Anything less is tyranny.

edit: if private websites are TRULY public fora, there could be a very strong argument made that hiding my comment due to downvotes is an assault on my rights, similar to covering my sign with a blanket as I try to agitate in ACTUAL, REAL public squares for civil rights or the legalization of marijuana or the proselytizing of my TIMECUBE religion. Unless, of course, there exists "important and real" free speech and "not important" free speech, but allowing for that distinction is probably more dangerous than having no free speech at all...


I had felt the same in 2015 when I got banned from Facebook for not providing my real data (they asked me to provide a photo of my ID, which I thought and still think was silly). But then I realised how good it was for me (as a user) and started thinking maybe we should aim to go back to the 'small Internet'. By this I mean small forums, maybe chat rooms or something of that sort. I often think back to the late 90s and remember how I felt using the Internet back then. Of course this is nostalgia, but sometimes I wonder if I can build something to get that feeling back, like an island of the 90s internet.


The "smol Internet" movement is what you're looking for. Returning to smaller online communities using things like Gopher, Gemini, fediverse, pubnixes, etc.

https://thedorkweb.substack.com/p/gopher-gemini-and-the-smol...


I think about this too and know it’s mostly nostalgia too. I’m curious about community organized LANs and running older tech. I also concede that almost no one actually wants to run this shit so it seems very unlikely anyone would want it.


These megacorporations answer that you're using a 'private platform' - the answer is to use a public platform.

https://youtu.be/SkaaPcjKI2E


It should be noted that this is self-promotion.


One way to look at this is that the natural state of things is that everyone has access to the local public square, and only a few have access to regional or national public squares. The early internet was a perturbation in this natural state, giving millions access to national and global distribution of their thoughts for no cost, but natural law is bringing us back to a steady state where you can only reach your friends, colleagues, and neighbors unless you make a career out of journalism, celebrity, big business, or politics.


Twitter is not "the public square". It is a night club.

Confusing private, for-profit businesses for public assets does a disservice to, and leads to ideas that are damaging for, both.


The only night club.


Ironically, this is how night clubs work too. One gets a monopoly/gets popular and everyone goes there for a time until it becomes uncool and people move on to the next cool club. In social media see LiveJournal -> MySpace -> Facebook -> TikTok. Each had a monopoly at the time until something cooler came along. The tenure also, ironically, seems to be around the same duration as a popular nightclub's.

Also, don't mistake user bases for monoliths in this analogy: under-18s moved on from Facebook a long time ago, and over-50s may skip TikTok entirely, but they showed up years after the ~18-year-olds did, so tenure is still relevant.


This is not a good take. First, it's not true that the most popular club has a monopoly in any meaningful sense. But even granting you that farcical premise: any "monopoly" effect is on a city-wide basis and does not scale, and does not trend towards global winner-take-all outcomes that social media does.


I'm not sure which night clubs you are going to, but "popular" ones are very much monopolies. Sure, the local bar that plays music isn't a monopoly, but neither is the "small" social network. You won't see John Mayer in a small night club and you also won't see him on a small social media platform. John creates lock-in for the other high-status participants at the club, just like the content creators create lock-in for you being on social media (or HN in this case). I can walk next door, but I have a worse experience in both cases, because they have a monopoly over a network that is valuable.

Your second point of the global nature misses the point I was making. Social things evolve. Whether on a global scale, country scale, or city level. Assuming there is a social construct that will "just work" until the end of time seems naive. But hey, if we are all using Twitter and Facebook in 20-30 years rather than the new social media company that has come to prominence then it will be a provably wrong take :).

Or maybe a better way to put it is whether the next generation is using them, because certain cohorts (see original post) will continue to use them since that's where their network is, while younger people get no utility because their network isn't there. See Twitter and Facebook.


> A perfectly pristine public square filled not with people, but consumers staring at advertising boards.

Just beautifully written. Do you happen to have a blog?


I only ever hear complaints from far-right people who want their speech to be freer than other people's or other entities'.


Where are the legal recourses? Are EULAs that ironclad? And should you lose all "your" data?

There should be a law saying that when companies ban you they must still allow you to download your data, especially on any site tied to your public presence.


>I would love to know a way this can be solved as its painting a depressing picture. I don't like how this makes me feel.

Easy. Stop banning people unless the content is illegal. Stop acting as moral police. Let the community police itself by burying offensive content. Give users absolute control over what they see. And ban bots.

This “problem” only exists because Twitter wants to control what you see.


The video in question seems to have been (incorrectly) flagged as either a revenge porn or upskirt video, which people have struggled to get taken down before the damage was done.


[flagged]


Forcing users to acknowledge their sins on a platform, when none are committed, is an issue. It reminds me of struggle sessions.

I will use an extreme case (unrealistic) to prove the point: If you were forced, every day, to say 'XXXXXX cereal company is the best in the world, and they are greeeat.' before you were allowed to eat your toast, you will eventually start to believe this. Ok, maybe not you - but humanity isn't as strong willed as you are perhaps.


Perhaps this is an unconstructive aside, but it's a pet peeve of mine how often we present these kinds of thought experiments in a way that pretends it's just the unwashed masses who are vulnerable to whatever manipulation is being discussed.

I really doubt it. I'd rather we internalize that you and me are almost certainly also vulnerable to serious misjudgements when exposed to similar situations.

Best case: you're right, and strong-willed individuals exist and are immune. But worst case, we're all worshiping Cthulhu without even realizing it, just because we were too proud to practice good informational hygiene.


>No meaningful damage was done to a human here. The author would get unsuspended immediately if they chose to delete the tweet, and knows it.

Forcing a human to submit is absolutely damaging, more so when the judgement is egregiously inhumane, incorrect, and final.


Clearly twitter is in the wrong here. But making a fundamentalist point about every minor issue you encounter isn't ideal either. I think it's entirely reasonable to expect people to pick their battles, rather than expect the whole world to organize around protecting their ego even in clearly irrelevant cases.

There's nothing wrong with criticizing the policies that caused this. But I have no illusions that there are any policies that would be perfect, nor that the tradeoffs here are trivial.

If everybody tried to be this fundamentalist, we'd still be stuck in caves fighting about whose corner is whose. There's a balance between avoiding erosion of sound principles, and being part of a large, constructive collective.

So this battle is fine as long as we're clear what it's about: the principle, not the specific post. And as such: being forced to do _anything_ with respect to that specific post isn't worth quibbling about. It's a fine example to get people talking, and that's it. If the author is personally harmed by his sudden lack of access to Twitter, then he should consider choosing not to fight on this detail. It's a cute cat video, not a whistleblower talking about grave misconduct or whatever.


The author is prevented from posting a cookie-cutter video clip from a game, not even some kind of actual self-expression. That's not inhumane; it's a complete triviality. If you can't make the case for why Twitter is in the wrong without these completely absurd exaggerations about inhumane treatment and damaging humans, maybe you need to admit that you don't actually have a case.


These global public squares didn't exist until social media so I don't consider it a human right.


The previous non-existence of certain technology isn't enough on its own to make this determination. It's also necessary to look at the overall state of a society to tease out the true consequences of these kinds of issues. I think a consequentialist lens is helpful here.

If your local municipality started preventing people who had said objectionable things from crossing into the town via a public road, it would not follow that this is just fine because at one time, roads didn't exist.

If you take a step back, and look at it as an issue of freedom of association and/or free speech, things get a lot murkier. If an entire society morphs itself to use those private entities for public discourse, at some point it almost doesn't matter that the entity is private, if the ultimate outcome is a serious violation of one's rights to associate/speak.

This is at odds with the expectations a private company rightly has about its ability to do business with whom they please, and this is why the issue is so contentious.

Neither side of this issue stands on solid ground.


I've said it elsewhere on HN today, but I can say it again - I simultaneously think you are right and wrong.

* These global services are de-facto public squares and forcing people out of them is to the detriment of humanity.

* Private corporations shouldn't have their property stolen/appropriated by calling them public squares.

I am a human and that lets me hold contradictory thoughts. I don't know a way to fix this.


Utilities have to deal with this contradiction, they build their critical services on public land.


That's a real issue. It's not just a monopoly in a competitive sense but also a monopoly on 'public squares'. Which is effectively against free speech.

When a company or moderator has the power to ban you from a public space, you are ostracized by a non-democratic panel with unknown agendas.


I am so tired of the public square analogy. You don’t need Twitter. You don’t need Facebook. You don’t need Reddit. And what’s more, you can still read them if you’re banned.

If you think it's ok for a restaurant to turn you away because of a dress code, then this is a non-issue. There are several major platforms you can use as a megaphone, just like you have plenty of restaurants to choose from for your meal. If you are systematically getting banned from each social media platform and being "cut off from the public square," that says more about you than it does about the current state of speech.


That’s a very naive argument. If your friends and family are on Facebook, your work contacts across the country are on Twitter, the only forums left alive on the web are on Reddit, how can you not need them?

These things have replaced existing connections you used to have via other channels. You cannot step away without severing them now; it is no longer a matter of personal choice.


> If your friends and family are on Facebook, your work contacts across the country are on Twitter, the only forums left alive on the web are on Reddit, how can you not need them?

You can absolutely step away from them and it’s not even hard. Call, text or email your friends and family. It’s far more intimate and frankly more satisfying. You lose the ability to broadcast a curated persona, but honestly in the grand scheme of things that’s better for both you and the rest of humanity.


I did this for years. I regret it. It's overly simplistic to think this works well. The devil is in the details, and being a decent human being also means adapting to others' wishes, needs and habits. Being a small-scale dictator about modes of communication isn't great.

You _will_ miss out on valuable personal communication if you choose to go this route. It's not just about an online curated persona.


I've been doing this since forever, and I don't feel I miss out on anything meaningful. Granted, I've never had that many friends to begin with, but the ones I do have haven't given me any trouble about not being on social media or whatever. In my experience these online platforms cater to shallow communications of dubious value.


> Being a small-scale dictator about modes of communication isn't great.

Oh come now, isn’t this a little dramatic?

If I was forcing them to use one app I want in order to stay in touch you’d have a point, but isn’t that also functionally what you’re saying we should have to deal with? Meeting someone all the way, only on ONE mode of communication they demand?

This also doesn’t address how niche this situation is. There can’t be that many people who can only talk to their family on Facebook messenger. This can’t possibly be a major issue.


It's not just about family; it's about friends too, and perhaps the periphery of extended family. Losing contact is a price. Is it worth paying?

That'll vary from person to person, but my personal experience with this was that this was a mistake I now regret. I incidentally also avoided whatsapp and other facebook-owned apps, which certainly didn't help. And some people are probably naturally great at collecting other contact details and keeping in touch via phone and email; I'm not.

I stand behind the principle, mind you; I just think I should have placed day-to-day human beings I needlessly lost contact with above that principle. I still encourage contacts to avoid facebook, but that's it. But to each their own, for sure.

I'm simply saying this because in a forum such as this one that likes to focus on the abstract and principled it's easy to overlook the plain and quotidian daily existence we all have too; we shouldn't.


“These things have replaced existing connections”

I hear this now and then and it puzzles me. All my personal online and business communication is through email, with some friends and groups on WhatsApp and Telegram. People sometimes talk to me on Twitter but, if I want to talk to them, I steer them to email. It seems really risky, as we see in so many incidents reported here, to actually rely on platforms like Twitter for maintaining connections.


For users who are banned from these services, they have to do just what you say.

Let's not confuse ourselves though - the vast majority of America does not feel the way you and I do. My kiddo's swim team is on Facebook; that's how we track and sign up for service, and if we lose our access we are excluded. Asking everyone on the team to move to email is a nonstarter.


Almost nobody in my circles uses email for personal conversations anymore. People don’t share their email when you meet at a conference. You don’t join the X tech discussion group over email (anymore).

This also implies only talking to people you already know, no opportunities to join or start communities.

And that’s still besides the point: just imagine that your email account is cancelled tomorrow because they don’t like something you said in a mailing list.


I don’t doubt what you’re saying about your circles. I live in a different universe. If someone only communicates through Twitter, etc., that’s a useful filter for me. I don’t have a Facebook account.

“only talking to people you already know”

Far from it. For example, the contact information for an author on a scientific paper often includes an email address.


In what world are people only able to communicate with their family and friends on Facebook?

Who only communicates with colleagues over Twitter?


In the world we live in now.

I knew a guy who was having serious financial trouble, and couldn’t afford to keep a phone line active. He relied on Facebook to stay in contact with his wife and son because he at least had access to WiFi.

You can make a similar argument about phones. People can just reach each other via mail, or physically visit each other…but this doesn’t exactly constitute a healthy state of interaction.

Use email you say! Almost every big email provider requires verification via phone or another email address these days.

It’s legitimately getting harder and harder to communicate in these other ways.

> Who only communicates with colleagues over Twitter?

I don’t use Twitter much these days, but for a period of time, it was by far the easiest way to get/stay in touch with the communities I was organizing. I did dev advocacy, and if I ran a meet up somewhere, I didn’t want to just hand my phone number to a room full of people, but did want the option to stay in contact.

Could I have found another solution? Yeah sure. But at the time, this seems pretty reasonable, and Twitter wasn’t yet banning people for inane things.

If I lost that account now, it’d sever an entire community. Could I work around that? Sure.

But downplaying the significance of these things seems short sighted.

I rarely touch social media these days for many other reasons, but that is definitely an isolating choice to make in 2022.


Unless your family consists of two people, you cannot possibly compare the time spent facetiming each one of them with just scrolling an aggregated news feed.


Phone call, email, text message, Viber, WhatsApp, Instagram, Twitter, Snapchat, Telegram, actually interacting in person.

Half my family isn’t even on Facebook and virtually none of us even use it anymore. If that’s the only way to keep up with family then don’t act like a complete jerk online, or you know don’t harass people, and you’ll be fine. Same as regular society.

Contrary to popular belief, it actually takes a lot to simply get banned. Yeah there are cases like the post above, but the reason it is even here is because it is noteworthy.


It does not take a lot to get banned - you just have to have a stated opinion that does not correlate with a large portion of the user base.

If the civil rights movement was being organized on facebook today with the culture and beliefs of the 1950's as our users, I am very confident MLK would have been banned from every service.

... and someone, somewhere, would say 'don't act like a complete jerk online, or you know don't harass people, and you'll be fine'.

I don't mean this pointed at you, I completely get the feeling, but I think ignoring history and the current in/out groups is likely to lead to conclusions that are not realistic.


This is an accusation by American conservatives that has little to no basis in reality. It’s just an easy source of fake outrage they don’t have to back up. You won’t get banned for “wrong think.” Look how long it took Alex Jones to get de-platformed - the dude was literally starting witch hunts with his accounts.

Look, I will be the first to say we need to rein in social media companies. Their ability to shrug and say “oh well!” with no repercussions as their vaunted algorithms (mis)handle the impossibly large scale of their businesses is wrong. No other industry gets away with so much except maybe religious groups and oil/gas.

That being said, we should not conflate it with the incorrect and frankly dishonest cries of “censorship” by the right.

Related note: I think you’ll find this Twitter bot very interesting: https://twitter.com/facebookstop10/status/154978628117712896...


If you get banned, just create another account and then don't do the thing that got you banned.

It's really not that difficult.


Banned users creating new accounts is not a feature, it's a 'bug' that the current platforms are trying to fix.

Ever create a new Facebook account? Even as a flesh-and-blood human being? Lots of folks have tried to do this for Oculus, and a great number of them were banned. It made the news on HN. Facebook is fixing this bug now, and the other sites are trying to do the same.

Eventually, when the big tech companies have 'fixed the bug' and your ban is - in fact - permanent and for life. . . what then?

"Don't do bad things" - a simple question... if the 50's/60's civil rights movement was done today on facebook, would MLK have been banned from twitter/facebook/instagram? I think he would be.

If you know a way to fix this, let me know. I think we - as humans - need to fix this sooner rather than later.


If it doesn't require you to prove your identity with a gov issued document then you'll be able to create a new account. My FB account certainly doesn't have my real name.

If it does require you to do so then run away from that platform very fast.

Yeah, a Million Man Meme is going to be so effective ... BLM and Antifa seem to have no problems organising themselves anyway, and apparently they are practically terrorists according to the alt-right.

If you want a way to "fix it" then vote for someone who will bring in regulation to crack down on the powers that tech companies have.

And if you don't want to do that because you're a libertarian fan of light touch regulation of business, then you probably need to have word with yourself.


But I "need" that restaurant because I am hungry, it's only a block away, I really love the pasta they cook, and the bartender I enjoy talking to only works at this particular restaurant :(


They're used to organise the social graph, so the analogy is off but not actually wrong in spirit. Yes, you can gather without them, but organising without them is what's difficult because again, they now nearly monopolise the social graph of acquaintances in terms of organisation (e.g. Facebook groups for a hobby, political protests where most people don't know each other IRL). This wasn't inevitable, but the analogy to me seems to be like saying: you don't need to drive when the country decided to only build roads.

As for your last sentence, well, not necessarily. I just don't see how that logically follows. Bans tell you that you're violating a rule, but those rules can be terrible. That's true of anything ever, I don't see how you can state that it's always the fault of the person being cut off. That's like saying all social ostracisation is really the fault of the person being ostracised. The person who is unpopular might be a terrible person, or they might be right but with uncomfortable facts or opinions.


> And what’s more, you can still read them if you’re banned.

That is changing and in many places already no longer true.


Such as?


Also FB private groups that may be of interest to the user.

For example, many football club-related groups are private for obvious reasons, and, as such, getting kicked out of FB also means getting kicked out of your local supporters' forum where the online discussions happen. Yes, people from that group also meet irl, but not all of them attend those meetings, and those meetings happen, at most, once a week (after the football season starts). For example, we're in the off-season now, but this is also the time of some of the most heated and interesting discussions, because right now we're in the middle of the transfer season.


Twitter throws up a jarring sign-in wall if you scroll down more than a few tweets now.


Instagram.


IG has always hobbled the browser version and always restricted access without an account.

It’s also pretty easy to just set up a new account on a different machine.


> You don’t need Facebook.

There are countries that use Facebook for everything, including business and official announcements. If you are not on Facebook then you are left behind. Mostly just because it is available and "free", I think.

People should not use these platforms as their "public square", because their "public square" then becomes hostile territory, but people don't listen to us and indeed use these services as "public squares".

I think that the only way to reverse this trend is regulation that will most likely come when societies finally start to realize the damage that has been caused by moving their communication to computers that are owned by a couple of Californian companies.


Every time I read about these dystopian deplatforming events on Hacker News, my first reaction is to write a comment on how we could find our way out of this. Then the term "Web3" comes up in my mind, and then I think "Oh no, this is HN, they will burn you alive" and I turn to other things again.

But right now, for some reason I feel so strongly about, I'll give it another try.

Wouldn't it be great if we could tackle two issues in one go? The annoying interfaces of Twitter/FB/Insta and co, and the constant fear of being deplatformed? By publishing our stuff on a decentralized database that is open for anyone to read in whatever way they like?

I fight hard to read the web in my way. When I see a link to Twitter, I manually change it to Nitter. I use an ad blocker. Bookmarklets to remove sticky elements from the pages I visit.

I try hard to publish my stuff, so it cannot be deplatformed easily. Preferably on my own domain.

But it's all crutches.

The web I read still feels aggressive, annoying, mean, dangerous, exhausting.

The most important platforms I write to (where people can like and comment) give me the feeling of walking on thin ice. At any moment, my audience and I might be separated.

I really long for a future where content is published in a freely available database. Free to read from and free to write to. Where a post is a content-addressable piece of data signed by the author. And comments too. And where a like is a cryptographically signed message, "I like this /joe". And where the order of your "feed" is not adjusted by outside forces.
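
To make that last idea concrete, here is a minimal sketch of a like as a signed message, assuming Ed25519 via the Python `cryptography` package; the message format and the post hash are made up for the example:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Joe's keypair; in such a system the public key would be his stable identity.
    joe_key = Ed25519PrivateKey.generate()
    joe_pub = joe_key.public_key()

    # A "like" is just a signed statement about a content-addressed post.
    like = b"like sha256:9f2b...cafe by /joe"
    signature = joe_key.sign(like)

    # Anyone holding Joe's public key can verify the like without a platform.
    try:
        joe_pub.verify(signature, like)
        print("like verified")
    except InvalidSignature:
        print("forged")

Because verification only needs Joe's public key, any client or relay could check the like without asking a platform's permission.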


You want to push content in a decentralised fashion? Run your own website, and provide an RSS or Atom feed so people can use their own aggregators. We've had this tech since the 90s.

It involves effort, though - not much effort, but a little more than posting on Twitter, Tumblr or wherever; and this, it turns out, is sufficient.
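
For what it's worth, the effort really is small: a feed is just a static XML file you can regenerate whenever you post. A minimal sketch in Python (the domain, file name, and posts are placeholders, and real titles/bodies should be XML-escaped):

    from email.utils import format_datetime
    from datetime import datetime, timezone

    def rss_item(title, link, body):
        return (f"<item><title>{title}</title><link>{link}</link>"
                f"<description>{body}</description>"
                f"<pubDate>{format_datetime(datetime.now(timezone.utc))}</pubDate></item>")

    posts = [("Hello, small web", "https://example.org/hello", "First post.")]
    feed = (
        '<?xml version="1.0"?><rss version="2.0"><channel>'
        "<title>My site</title><link>https://example.org/</link>"
        "<description>Posts</description>"
        + "".join(rss_item(*p) for p in posts)
        + "</channel></rss>"
    )
    with open("feed.xml", "w") as f:
        f.write(feed)

Upload feed.xml next to your pages and any aggregator can subscribe to it.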

> I fight hard to read the web in my way.

Turns out most people posting content don't want to fight hard to post it, and most people reading content don't want to fight hard to read it, and this is why twitter, facebook et al are big things, rather than everyone just having their own sites and decentralised syndication.

If something else is to take twitter's place, its barriers to entry must be at least as low as twitter's.


It's getting harder to "run your own website" without the risk of the hosting provider pulling you down if you generate too many complaints and are too much trouble for them.


Not really. Kiwi Farms, Encyclopedia Dramatica, etc. are all online; pickup artist forums; satan worshipping forums, whatever, they’re all out there.

Remember most hosting providers make a living from customers, and generally don’t pull things down unless it’s super sketchy under their local laws. Dreamhost, Tucows etc. had a reputation for years for being very liberal with their hosting. Pretty much anything goes that’s not blatantly illegal.

And for the many sites with controversy, they probably don’t use a hosting provider.

People used to run entire forums like HN (smaller scale) on their college dorm PC; this is how Slashdot started. It’s not that hard if a college sophomore can do it. As you scale, you probably move to an actual house or building and need more servers and get a leased line from an ISP.

Yes people can appeal to the internet provider(s) but they rarely act. Remember, Stormfront was self hosted for ~25 years and only got torn down because of Charlottesville. Turns out if you actually do things (rather than just talk) with shitty ideas, at least in the USA, this crosses a line.


Is it? Are you planning on running your own Daily Stormer or 8chan?

It's not a slippery slope, it's not harder to run your own website. Don't extrapolate from a few extremist cases.


> Is it? Are you planning on running your own Daily Stormer or 8chan?

...and if you are find out what hosting services they use and use one of them.


I've seen people that had contrary, skeptical, positions on Sudden Onset Gender Dysphoria get deplatformed by hosting providers.


Good.


I disagree with you. According to you, that means I shouldn't be able to speak.


> I really long for a future where content is published in a freely available database. Free to read from and free to write to. Where a post is a content addressable piece of data signed by the author. And comment too. And where a like is a cryptographically signed message, "I like this /joe". And where the order of your "feed" is not adjusted by outside forces.

Spammers and griefers historically conquer that kind of environment. Usenet fulfilled virtually all of your freeness criteria and now it's a cesspool of spam and worse. Everyone worth talking to has left. Moderation is necessary for quality discussion, or selective membership, or more likely both.

You can make a Merkle tree of signed comments but the problem was never the integrity of the messages (I don't recall anyone messing with the contents of existing Usenet messages) or relationships between them (the worst practical problem was bottom-quoting).

The order of a "feed" is adjusted by the rate at which spam is generated; spam will always be the first result in the feed because that is spammers' objective and they have automated resources to achieve it.


Usenet does not have the data needed to filter spam. A system of cryptographically signed follows and likes would.

My client could calculate a "karma" value for each post. How many of my friends follow the poster? How many of my friends' friends? How many of my friends' friends' friends? How many of my friends liked the post? How many of my friends' friends liked it? And so on.
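
A toy version of that scoring, assuming the client already holds a verified local graph of signed follows plus the set of verified likers for the post (all names below are invented for illustration):

    def karma(poster, viewer, follows, likers, depth=3, decay=0.5):
        """Score a post by how close its poster and likers are to the viewer.

        follows: dict mapping an identity to the set of identities it follows
        likers:  set of identities whose signed likes we verified on this post
        """
        score, weight, frontier, seen = 0.0, 1.0, {viewer}, {viewer}
        for _ in range(depth):
            nxt = set()
            for person in frontier:
                for friend in follows.get(person, ()):
                    if friend in seen:
                        continue
                    seen.add(friend)
                    nxt.add(friend)
                    if friend == poster:
                        score += weight          # a friend (of a friend...) posted it
                    if friend in likers:
                        score += weight * 0.5    # ...or liked it
            frontier, weight = nxt, weight * decay
        return score

    follows = {"me": {"alice", "bob"}, "alice": {"carol"}, "bob": {"carol", "dave"}}
    print(karma("carol", "me", follows, likers={"alice"}))  # 1.0

Posts from strangers with no path to the viewer score zero, which is the kind of filter the parent says Usenet lacked; the replies below point out the obvious counterattacks (bought keys, abandoned identities), which this sketch does nothing about.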


This sounds very similar to PGP if everyone had chosen to use it comprehensively for every conversation. Adoption, proper key management, and ease of use were the killers there.

Social-karma-style scores are trumped by giant clusters of spambots building up a huge collective karma and then infiltrating existing networks at the edges by phishing or buying existing accounts/keys.

Also, some people turn nutjob, and then you need a way to go back and revoke all your permanently-recorded, cryptographically signed likes and follows, because anything less would allow censorship. Not to mention the cost of re-running PageRank over the entire internet population on every client for every new (un)like/follow.

People who abandon their keys leave permanent positive karma sitting around for exploitation. If karma is only positive, then pure abusers have no negative signal and can coast on these abandoned karma edges. If you allow negative signals (publicly signed "block"s, "this is spam" judgements, etc.), then spammers have a new weapon: kill off every other high-karma key with negative edges to reduce its spam-fighting effectiveness. Don't forget that this kind of battle will also be waged by otherwise highly rational political actors whose survival depends on curating high-karma keys and cliques to bolster them.

What works is human moderation in highly functional online communities. Read the excellent posts by dang here about his ethos and practices. Human communities are too complex for algorithms or models to manage, at least so far.


Why does it need to be "web3" and not "RSS"?


How do you prevent someone from copying your RSS and making it their RSS? Where is the proof that you published it first?

How do you show that you have many followers when you publish via RSS?

How do you "like" something you read via RSS?

How do you comment on it?

How do you like and comment in a way that proves your identity?

How do you prove to the world that your identity is important - aka that you have lots of followers and likes?

These are the bits and pieces the digital social fabric is made of. Currently it is all owned by companies.


> How do you prevent someone from copying your RSS and making it their RSS? Where is the proof that you published it first?

How do you prevent someone from minting their own NFT using your NFT's underlying image file? You could rely on a third-party centralized search engine which looks at the actual content to determine which one appeared first, and the same would apply to RSS.


Many people are laughing at the whole NFT thing right now but I suspect that in 10-15 years people will be laughing even harder when the whole thing collapses.

Yeah you got stuff in a blockchain, but the "stuff" you put in there is a mere link to a service which might not survive so long (most services don't).

So yeah in 10-15 years you'll probably have a lot of former nft owners that only own a broken link (or possibly even less).


Bold thinking NFTs will make it more than 5 years


NFTs will be around forever. They'll be mocked forever too.


I was being generous


Try harder; look at use cases of NFTs other than "broken links". You speak with so much confidence without knowing what you are talking about.


"broken link" is indeed not the only problem with NFTs, but I'm keen to hear your perspective on NFT use-cases.


ENS (web3 domains), mirror.xyz (publishing), shibuya.xyz (video content), Sunflower Land (GameFi), membership NFTs (BAYC). There are more use cases that are experimental. But "broken links" is why I do not take HN users seriously about web3.


> membership nfts (bayc)

I have a problem with this one - the membership is supposed to give you access to events/etc hosted by BAYC, so in this case they are the trusted party and can run a database. The blockchain doesn't seem like it adds anything here because ultimately BAYC can choose to deny entry for any reason.

Furthermore, how do they handle the case where a membership NFT is stolen or acquired illegitimately? Do they still honor the stolen NFT and let the thief in? Do they blacklist the NFT and re-mint a new one to give to the original owner? Etc.

Interaction with real-world state is where all blockchain-based projects break down. Blockchains can only enforce their guarantees on the chain itself - replicating it to the real world requires a trusted party, at which point a lot of the blockchain's perks no longer apply and a database starts making more sense.


> I have a problem with this one - the membership is supposed to give you access to events/etc hosted by BAYC, so in this case they are the trusted party and can run a database.

Ethereum blockchain is the database, BAYC can't delete, modify or censor any BAYC users.

> Furthermore, how do they handle the case where a membership NFT is stolen or acquired illegitimately? Do they still honor the stolen NFT and let the thief in? Do they blacklist the NFT and re-mint a new one to give to the original owner? Etc.

That's a good question and it already happened, I do not have an answer for that.

> at which point a lot of the blockchain's perks no longer apply and a database starts making more sense.

Getting a BAYC is as simple as transferring an NFT from one wallet to another: mint it with a wallet on the BAYC website, or buy it on a centralized or decentralized marketplace, anywhere in the world. No database offers this kind of frictionlessness.


> Ethereum blockchain is the database, BAYC can't delete, modify or censor any BAYC users.

But why does it matter what the blockchain says? The bouncer hired by BAYC to stand at the door of their event can "bounce" you regardless. May as well save resources & complexity by having them run a DB since they can ignore what the blockchain says anyway.

> Getting a BAYC is as simple as transfering an NFT from one wallet to another, mint it with a wallet on BAYC website or buy it on a centralized or decentralized market place, anywhere in the world. There is no database with this type of frictionless.

Getting a brand new BAYC involves "minting it" on their website - no different than buying a ticket on Ticketmaster/etc, and I'd argue the Ticketmaster route is easier as you don't have to worry about safeguarding a wallet/installing Metamask/etc.

Getting a BAYC off someone else involves an Ethereum transaction with its associated fees - in practice those are usually mediated by a marketplace such as OpenSea so decentralization/etc goes away. There's no reason the BAYC website couldn't just manage those transfers directly and skip all of the complexity. There's a theoretical advantage where because it all happens on the blockchain you could transfer an NFT without any third-party involvement (depending on how the smart contract is set up, but I'm assuming good faith and no artificial restrictions preventing that) but how many people do this in practice?

Considering the true value of a BAYC is to get access to their exclusive events (I am ignoring the temporary speculation aspect which is now on its way out), I still don't see what advantages the blockchain brings compared to them just running an old-school members' club with memberships in a DB, since in practice they can deny entry to the event to anyone regardless of what the blockchain says. If they want to make memberships transferrable they can trivially do so on their platform, since in practice most NFT transfers right now happen on a centralized marketplace anyway.


I think there is a form of exhaustion regarding how crypto and Web3 are supposed to resolve many things that already seem to be working just well enough.

I understand that innovations do happen, but the complexities (like smart contracts), the limitations of an append-only database (that only stores transactions and signatures), the damage done to our environment (CO2), the impact on electronics availability (video card shortages), the criminal uses of crypto, and the life savings lost to a gambling-like international market do make you wonder if this is all worth it.

I’m open to being proven wrong, but I personally think that web3 and crypto, while very interesting, don't have the attributes required to be a worthy innovation path for humankind.


> the criminal uses of crypto

The criminal use cases in traditional banking are still not solved either. With any open source technology, you cannot block criminals from using it.


Web3 domains, lollerplex.

It took 25 years to start seeing some adoption in the ipv4 to ipv6 transition, and we’re nowhere close to finalising said transition…

If you think DNS is going away because of the blockchain, then I can start laughing way sooner.


ENS is not for replacing DNS, apples and oranges.


The strangest thing about your list, at least to me, is so much focus on "identity". Firstly, why does this even matter? And additionally, in which way does blockchain demonstrate "identity" any more than a domain name or user account? It demonstrates control over a mechanism, sure, but absolutely not who someone is

For instance, we are all commenting here without "proving" our identity in any way, and it's totally fine. And this is even a far more serious, business-centered discussion context than most of the Internet!


These are all solved problems (for decades) except for the one about proving you're important. Is that really the end goal of web3?


But a true web3 is possible: not through blockchain and things like that, but through things like WebRTC, Bittorrent and protocols of that sort. Peertube is one example of something of that sort.

A revival of a noncommercial, and uncommercializable web.


Are you young? Because we once had the web3 you're describing. Back when P2P was a trending technology, we implemented most of the web on peer-to-peer mesh networks. File sharing, forums, web pages, we had it all. The conclusion is, it was so inefficient most people didn't want to use it. That's why people today use Hacker News rather than a P2P forum, trusting the central authority that is Y Combinator.

Are you regularly using these P2P mesh-network-based communication tools? If not, you won't see the web3 future you are looking for. You don't get a future you aren't willing to use regularly.


Are you old? You think it is easy to post on Instagram? Haha! Kids go to insane lengths to optimize their reach. They would swim through a lake of piss and puke for a popular post. I know some who sit all evening and "follow, unfollow, like and comment" on stuff they are not interested in at all. It's a drag. It's work. It's exhausting. And everybody hates Instagram.

Why do they do it?

Because on Instagram, you can earn social currency: Likes and follows. A currency that gives you an advantage in the real world.

You can't do that with the technologies you mention.

But one could do it even better if likes and follows were cryptographically signed messages. Then a like from a celebrity is something that nobody can ever take away from you again.


This reads to me like “GM made the EV1 in the 90s and it sucked, you’re so silly for thinking you can make electric cars work now”.


Back in those days, we didn't have cheap, powerful lithium-ion batteries or hub motors.

There is no equivalent technological breakthrough in network performance.


Define “performance”?

Depending on how you define it, I think it either isn't the causal reason for the lack of success, or has actually 10xed. Batteries were more of a gradual easing than a step function as well.


> ...By publishing our stuff on a decentralized database that is open for anyone to read in whatever way they like?

1. Isn't that how web1 worked?

2. Who's going to pay for the storage and bandwidth?

3. As long as you are hosting something on planet Earth, the server is bound to be governed by some country. And I am sorry to say, if a piece of content is considered illegal by the government of the country where the server is physically located, they can shut it down, no?


You can use a redirector extension to change twitter to nitter links.

I don't think web3 solves any of the problems you're mentioning in a real world way, which is probably (a part of) why you get strong reactions here.


There are a few of these platforms now, like hive, lens protocol, member.cash or deso.


> Overnight, I received a response that my appeal had been denied. So a human being, someone who works at Twitter dot com, looked at that video, looked back at the rule it was breaking, looked once again at the video, and went “Yeah, this all checks out.”

Of course not. "Appeals" are certainly not reviewed by human beings; maybe by another AI, or more probably by the same AI with higher thresholds. This is all automated stupidity.

Twitter does have a "bot problem", except that the bots are inside, not outside.


Twitter is an abysmal shithole of bad software engineering and bad webshit policy, which seems to be a requirement to be a website that serves public utility. I also so happened to rant about idiot moderation policies just 2 days ago: https://news.ycombinator.com/item?id=32174538

I can write this rant every day.

Also, this is the first thing that comes to mind when hearing "AI moderation". What we are seeing here is just hype and clout around the new AI boom, and so idiot managers and consumers are demanding it regardless of whether it will be well engineered and put to a suitable application.


George Orwell's 1984? Aldous Huxley's Brave New World? Nah, we're headed toward Terry Gilliam's Brazil:

"Mistake? Haha. We don't make mistakes."

https://youtu.be/wzFmPFLIH5s

(Or maybe Mike Judge's Idiocracy, but I'd wager Brazil.)


I've always thought that was the most plausible of the dystopias, and it's the most terrifying, because there's literally no way of avoiding becoming a target of the state (or whoever it might be assuming state-like functions).


What I find surprising is people being willing to trust a random person saying they got banned, without any evidence to prove it.

I find this very suspicious. Why not show what Twitter told you? That seems like the first thing someone would share when complaining about getting banned.

I'll go with "trying to surf on the Stray hype to get clicks" until more details are shared.


I suppose it’s possible someone would make this up, but having dealt several times with erroneous content removal / account blocking from FANG companies, the story rings true.

E.g. I’m currently banned from Google Ads because they “detected suspicious payments” without ever trying to charge my card (the same card I use without issue for GCP and Google Workspace). They’ve denied three appeals without any information and still won’t tell me how to fix it. I have a feeling I’ll have to write one of these blog posts to make anything happen.


The full video that supposedly causes the auto-ban is in the article, so I decided to just try it:

https://i.imgur.com/WEP4dcV.png

I assume "additional tasks" means needing to delete the tweet to be unsuspended, as the author described.


Now that's interesting


The author got banned for "revenge porn". I guess it's their fault for putting another pussy up on Twitter.


A. A human _was_ involved and B. His issue was resolved within 24 hours: https://twitter.com/imranzomg/status/1549914737781063680

Like, come on. I agree that the processes and moderation models could be better, especially at the appeals phase, but we don't see the other side of the battle twitter content moderation is fighting.

Even if they're doing a stellar job statistics-wise, we probably wouldn't know, because PR wouldn't want a blog post about how Twitter hovers on the brink of becoming a cesspool.

I'm less sympathetic when inhumane moderation policies ruin lives as we see a lot of with google properties (YT, gmail), but this instance is pretty far from such a case.


"We don't see the other side" is perhaps literally the biggest problem.

I will have sympathy for this process when they choose to tell us how it all works, clearly and plainly. Otherwise, let the metaphorical rock-throwing continue.


What are you arguing here? That twitter is good at moderation because we simply can't see it behind the scenes?

I can think of hundred different ways this could have affected someone negatively, especially if they weren't able to speak with support in native language.

So c'mon, there is no excuse for this ban to have happened in the first place, and then for it to stand after the first appeal, statistics or not.


My primary points are that 24h turn around to get the issue resolved is not bad, and that blaming the AI is silly when the human in the loop didn't fix anything.

But yes, I am also saying we don't know the reality of what the content moderation team is dealing with, we basically only see the false positives. I'm not saying they're doing a good job (especially the team handling appeals), I'm saying that we should be more sympathetic to the trade-offs involved in automated moderation.


So, if you don't know the reality, why pretend, or even extend them the benefit of the doubt on good faith? You've lost me, man. They can do more to be transparent, but they don't. This is not a technological blocker, it's a deliberate decision. Don't act like they can't show the reality of moderation; they simply choose not to. We already established that in this scenario they chose to do nothing after manual review by a human. What more evidence of failure do you need? It's not working, period.



