The recent YouTube purges provide an example of this mechanic in action. In order to prevent "discrimination, segregation or exclusion" of certain groups of people, YouTube must discriminate, segregate and exclude certain groups of people.
We don't make peace with our friends, we make peace with our enemies.
Attention can be gotten with conflict, but there is a cliff-like point after which it becomes counterproductive for ad revenue: see advertisers pulling out of YouTube (or some Fox shows, for that matter) over not wanting their ads to show next to incendiary content.
Where? Where can I see advertisers pulling out of YouTube? I heard all about the "adpocalypse" but I waited and, sure enough, YouTube's revenue from advertising grew by leaps and bounds during the time period of the "adpocalypse", when they were claiming that companies were leaving. It was, as far as the actual numbers reported in Alphabet's earnings show, a total fabrication, or at least extreme hyperbole. There simply never was a meaningful number of advertisers pulling their ads.
"Where can I see advertisers pulling out of YouTube?"
Consider a situation in which you are selling a gadget, or software.
Do you want your ad run next to a guy using the fgt term to describe other people? Even in 'good spirits' or even 'in context'?
Maybe that's too abstract, given that you perhaps might not be selling stuff.
I do.
Frankly, I think it's all overblown, I think people should speak their minds and we should all be tolerant.
That said, I don't want my products anywhere near your controversy. No way. We put a lot of effort into what we do, we're trying to communicate a message, and we 100% do not want that anywhere near our message.
When we get up and go to work and try to do our jobs, a lot of this stuff seems academic and ridiculous. For the same reason we don't want our ads next to porn, we don't want it near ugly language. I love stand up comedians, but much of their (live) work is too off colour for many advertisers.
So, the decision by YouTube etc. to pull ads but allow the content to remain is actually quite reasonable.
Certainly, and I wouldn't argue any differently. I don't know a great deal about running advertisements on YouTube, but the screenshots and discussion I've seen about the tools they give advertisers to target their ads make it at least plausible to me that companies re-targeted their advertising or became more careful about it in the wake of the various controversies. But that's not what the 'adpocalypse' claimed to be. The claim, made by YouTube and used to justify many of their actions, was that advertisers were completely pulling out of YouTube, and doing so in significant numbers. The reality, at least according to the earnings reported to shareholders and legally attested to in SEC filings, is that this has never happened.

Which, really, makes a lot more sense. The quantity of content uploaded to YouTube is positively astronomical, so any claim that there is a dearth of advertiser-friendly content would be fairly suspect. Even if they pulled ads from 99% of the videos uploaded to their platform, it would still leave multiple years' worth of content that could carry ads every day.

What might actually be a bigger problem for them isn't so much the content that is uploaded as the content that the public voluntarily consumes. There's more than enough educational content uploaded to the platform every day to occupy the audience, for example, but of course that's not where the majority of people are spending their time. Much of that blame probably falls at the feet of YouTube themselves, given the way their recommendation systems operate.
They claim that this is backed by machine learning, but I have a hard time imagining how. I've seen many machine learning systems, but never one whose entire model whip-saws around with the schizophrenic rapidity of YouTube's. Watch just one video about something like a conspiracy theory or anti-vaccination nonsense out of curiosity, and their "machine learning system" will immediately conclude that you want to watch large quantities of those kinds of videos, and your recommendations will be filled with them for quite a while. It seems obvious to me that they're not training any kind of model to predict the viewer's desires. Not even very naive Bayesian models would predict that one data point outweighs every other action by a user and translates into a massive shift in interests.

There seems to be a strong tendency for their systems to ignore individual preferences as much as possible in favor of funneling users into broad categories that the system knows better. I wouldn't be surprised if the way it works is that it takes something like a conspiracy video, groups it with other conspiracy videos, and then just sends any viewer in that direction while ignoring the user's own history, subscriptions, like/dislike status, and all other individualized signals. Given the scale, maybe YouTube simply can't afford actual personalized recommendation systems?
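To make that contrast concrete, here's a toy sketch (Python, with made-up categories and counts; this is an illustration of the argument, not anything YouTube actually does). A history-weighted interest model barely moves after a single conspiracy video, while a "funnel the viewer toward the last video's cluster" heuristic ignores the rest of the history entirely:

    # Hypothetical illustration only: contrast a simple count-based interest
    # model built from a user's whole watch history with a heuristic that
    # recommends based solely on the cluster of the most recent video.
    from collections import Counter

    # Made-up watch history: 50 cooking videos, 30 woodworking, 1 conspiracy.
    watch_history = ["cooking"] * 50 + ["woodworking"] * 30 + ["conspiracy"]

    def personalized_scores(history):
        """Score each category by its share of the user's full history."""
        counts = Counter(history)
        total = sum(counts.values())
        return {cat: n / total for cat, n in counts.items()}

    def funnel_by_last_video(history):
        """Recommend only the last video's category, ignoring everything else."""
        return {history[-1]: 1.0}

    print(personalized_scores(watch_history))   # conspiracy ends up ~0.012
    print(funnel_by_last_video(watch_history))  # {'conspiracy': 1.0}

Under the history-weighted model, one out-of-character view is a rounding error; under the funneling heuristic, it completely determines what gets recommended next, which is much closer to the behavior being described.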
"Incendiary" is relative to tribal group though. Sure the Venn-diagram overlaps a tiny bit in the middle but for the most part what the far left and the far right get angry about is different.
Back to advertising potential, imagine some 10sec no-context dashcam video of some racially ambiguous dude who allegedly did something getting run down by a cop car. Whether an ad for a Ford Explorer is appropriate in that context depends totally on the tribe viewing the video.
Such things have existed for quite a while. The difference over time is that they have become more common and more frequent, while disseminating them has gotten easier and easier. Still, in the era of cell-phone cameras and viral emails we only rarely saw the same kinds of viral conflagrations of outrage that we see today. The structure of today's social media is vastly different and omnipresent.
Technologists recognized the addictive power of games and applied those dynamics to social media. We tried our hardest to increase engagement, and that drew us into combining gamification and viral discovery in ways that amplify outrage.
Now we're in 2019. Looking back on it all, is it really any surprise? It seems like society is ready to let multinational corporations, who have access to tremendous amounts of information about us, monitor, censor, and censure us for "wrongthink." What could possibly go wrong?
Insofar as there is a cultural "mainstream", relative to that. Most videos on YouTube do not carry the potential for offending anyone (tutorials for baking chocolate cake, etc.), but the ones that play at the edge can strongly appeal to some audiences.
My take on this is that there is always a certain level of censorship in every country, under every government and platform. Either you truly don't censor anything, e.g. allow ISIS propaganda on your media as well, or you acknowledge that somewhere a line has to be drawn, and extremism preached by the western right or left is still extremism, just as what ISIS preaches is extremism.
Otherwise it'd just be hypocrisy and double standards.
In the mid-20th century, the US equivalent of "ISIS propaganda" was "Communist propaganda", and Communist literature was in fact legal in the US and widely available in libraries, although not generally broadcast on TV. This was, in fact, held to be the major distinction between the US and the USSR: the USSR "acknowledged that somewhere a line has to be drawn", and the US did not.