It's easy to understand: people in most companies don't appreciate having their ads next to porn, extremist content, or violent content. Especially so when it's posted as a reply to the ads.
What people may not appreciate is that on Twitter, brands are totally dependent on Twitter HQ to moderate this content. That is because all the replies are tweets as well, and you can’t delete other people’s tweets.
On Facebook, in comparison, you can delete nasty comments on a post by your brand. Brand safety is far more in your own hands there. I think this is true on LinkedIn too, although I’m not certain. In general it is a much “cleaner” place because of the focus on real names and career content.
Pedos and prostitutes are fine, but you wouldn't want your ads for shaving cream to be next to some guy questioning the immigration or COVID policies of his country /s
The part about them not liking it is the obvious one, but there's nothing about why they don't like it. Is it something like not liking pineapple on pizza? Is it like not liking an extremely muscular human body? Or does it actually have some kind of logic behind it?
Because if it has logic we can reason about it.
I know for a fact that all those brands actually do advertise on some websites where horrible stuff is discussed.
Look at who advertises on 4chan. Elon is taking the platform that direction. That’s who will be all that’s left if he doesn’t figure out content moderation and get his ads team and software fixed ASAP.
They likely had tools to make certain that advertisers who objected didn't show up next to the latest viral meme (so that Procter & Gamble didn't show up next to anyone who used the hashtag #tidepodchallenge).
They also likely had tools for advertisers to contact their account managers and say "this screenshot from one of our testers shows our content next to {objectionable figure} - make sure that this doesn't happen again," and make it happen.
The account managers were likely quite responsive if {jewish owned company advertising / verified} said that customers asking for support were getting antisemitic replies.
The advertisers had someone to contact and make things right - and were ok with that.
> Getting word that a large number of Twitter contractors were just laid off this afternoon with no notice, both in the US and abroad. Functions affected appear to include content moderation, real estate, and marketing, among others
Note the "content moderation" and "marketing" categories of employees.
So it's not content moderation to enforce the rules of the system, just content moderation to appease advertisers.
That makes so much more sense now: that's why so much content gets reported and stays online, and you get responses saying there's no violation despite it being a clear violation...

I've reported so many tweets over the last couple of years where people threaten others with violence, post graphic videos of animals being killed, or post overly graphic videos from the war in Ukraine. I just get replies that there is no violation, so I assume it's an automated response unless many people report the same content.
It's a mostly human process, and the humans that pay money are the ones most likely to be heard first.
Many social media platforms experimented with purely automated moderation and ran into difficulty: the automated systems likewise flagged everything that looked remotely objectionable, and content creators got banned for non-reasons.

This leads to needing to have a human check things - and humans don't scale.
Look at the stories of Youtube content moderation - both the humans involved and the false positives from when humans aren't involved - for examples of "why we can't have nice things."
They had figured it out to the extent that there was moderation. I’m not saying the old approach was the right one, but brands need to understand what will and won’t show up next to their ads. Right now, could be almost anything.
One example: when everyday people see your ad and constantly see replies below it about The Jews, they can form unconscious associations between antisemitism and your brand. Why would you risk this as a brand? You are paying a company serious money to promote your company the way you want it promoted.
People will screenshot the ad and post complaints about it on Twitter, asking why brand XYZ supports <abhorrent thing> and calling for boycotts. Surely that connects the dots well enough to see why a company would not want this?