
YouTube is a major player in that space.

https://www.theguardian.com/technology/2018/feb/02/how-youtu... YouTube's algorithm recommended extremist/conspiracy content during the 2016 election.

Here's YouTube CEO Susan Wojcicki. http://www.telegraph.co.uk/news/2017/12/05/youtube-taking-ex... (behind a free registration wall)

"Since June we have removed over 150,000 videos for violent extremism."

"We will continue the significant growth of our teams into next year, with the goal of bringing the total number of people across Google working to address content that might violate our policies to over 10,000 in 2018."



> YouTube's algorithm recommended extremist/conspiracy content

Would you agree that the algorithm recommended this content because people believe in, or at least are interested in, conspiracies? Even if we feel that's a mistake, who are we (or Google) to be the arbiters of truth?


No.

Content recommendation algorithms reward engagement metrics. One of the metrics they reward is getting a user's attention, briefly. In the real world, someone can get my attention by screaming that there is a fire. Belief that there is a fire and interest in fire are not necessary for my attention to be grabbed by a warning of fire. All that is needed is a desire for self-preservation and a degree of trust in the source of the knowledge.

Compounding the problem, since attention-grabbing content boosts engagement and people make money off of videos, there is an incentive in place encouraging the proliferation of attention-grabbing false information.

In a better world, this behavior would not be incentivized. In a better world, reputation metrics would allow a person to realize that the boy who cried wolf was the one who had posted the attention-grabbing video. Humanity has known for a long time that there are consequences for repeated lying. We have fables about that, warning liars away from lying.

I don't think making that explicit, like it is in many real world cases of lying publicly in the most attention attracting way possible, would be unreasonable.
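
A toy sketch of the two ranking policies (invented numbers, nothing like any real ranker): score purely on engagement and the attention-grabbing item wins; discount by a hypothetical source-reputation signal (the "boy who cried wolf" memory) and it drops.

    # Toy illustration, not any real system: engagement-only scoring vs. the
    # same scoring discounted by a hypothetical source-reputation signal.

    videos = [
        {"title": "FIRE!!! You won't believe this", "clicks": 900, "minutes_watched": 2.0, "source_reputation": 0.1},
        {"title": "Calm, well-sourced explainer",   "clicks": 120, "minutes_watched": 9.0, "source_reputation": 0.9},
    ]

    def engagement_score(v):
        # Rewards grabbing attention, however briefly.
        return v["clicks"] * v["minutes_watched"]

    def reputation_weighted_score(v):
        # Same engagement signal, discounted by how often the source has cried wolf.
        return engagement_score(v) * v["source_reputation"]

    print(max(videos, key=engagement_score)["title"])            # the "FIRE!!!" video
    print(max(videos, key=reputation_weighted_score)["title"])   # the calm explainer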


No one hears the lies once they're removed. That can be good, but it also might work to prevent more people from criticizing or exposing them and thus, even if only slightly, validate them in the eyes of liars. This might yield nastier lies in the future.


Google recommends that stuff to me, and I don't believe in it or watch it. Watch math videos, get flat earth recommendations. Watch a few videos about the migration of Germanic tribes in Europe during the decline of the Roman Empire, get white supremacist recommendations.

My best guess? They want you to sit and watch YouTube for hours, so they recommend stuff watched by people who sit and watch YouTube for hours.
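
A rough sketch of that guess, with invented data: if candidates are scored by the total daily hours of the users who watch them, whatever the heaviest users favor floats to the top, regardless of what any individual viewer believes or wants.

    # Hypothetical sketch of the guess above: rank candidates by the total
    # daily hours of the users who watch them, so binge-watchers' tastes dominate.

    from collections import defaultdict

    # (user, video watched, that user's average daily hours on the site) -- invented
    watch_log = [
        ("casual_viewer", "math_lecture",        0.5),
        ("casual_viewer", "history_documentary", 0.5),
        ("binge_watcher", "history_documentary", 6.0),
        ("binge_watcher", "fringe_rant_1",       6.0),
        ("binge_watcher", "fringe_rant_2",       6.0),
    ]

    score = defaultdict(float)
    for user, video, daily_hours in watch_log:
        score[video] += daily_hours   # heavy users contribute far more weight

    for video, s in sorted(score.items(), key=lambda kv: -kv[1]):
        print(f"{video}: {s:.1f}")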

This stuff reminds me of the word "excitotoxins," which is based on a silly idea yet seems to capture the addictive effect of stimulation. People are stimulated by things that seem novel, controversial, and dangerous. People craving stimulation will prefer provocative junk over unsurprising truth.


>Watch a few videos about the migration of Germanic tribes in Europe during the decline of the Roman Empire, get white supremacist recommendations.

Ugh, that happened to me. I was halfway through a video making weird, interesting, yet really benign claims when suddenly they connected all the pieces in support of some bizarre white supremacist shit.

Just because I wanted to know about Proto-Indo-Europeans.

I looked over to the recommended videos, and it was already like 20% armchair white supremacist videos.


I binge-watch YouTube in a small window while doing certain kinds of work and specifically don't want anything I'm too interested in. This leads to future "weird" recommendations.

I also have to say that there's a certain comic relief to videos by flat-earthers (no moon landing, ancient astronaut) and other conspiracy theorists. Some mix valid points with hyperbole but some are so far out there that they're comical.


>Google recommends that stuff to me, and I don't believe in it or watch it.

So? Obviously people are watching them.

Or are you saying that YouTube has a strong hidden conservative bias?


A friend asked me to watch a clip of Ben Shapiro so we could sincerely talk about what the other side believes. I forgot that I was on a fresh OS install, not yet logged in. Well, right after that one video, every single video recommended to me became a "Ben Shapiro SLAMS liberal...".

I get that YouTube had no other metric to gauge me by, but there are 10 or 12 recommendations and there's no law that says YouTube can't ease into them. Whether you're for or against Shapiro, this kind of behavior pushes people into bubbles and radicalizes them in a feedback loop.


Actually, I noticed YouTube's recommendations seem _heavily_ biased towards what you watched recently. This is likely due to how people watch videos - much like people binge-watching TV serials, they are very likely to continue watching videos from the same or similar channels.

In most cases this is harmless - play a song, and YouTube will automatically create a playlist of songs in the same genre. Watch a cooking video, and it'll give you a dozen other recipes by the same chef.

Unfortunately this doesn't work for political videos, as you've noticed. But there is absolutely no incentive for YouTube to insert videos from opposing viewpoints into the recommendations. YouTube is just giving people what they want.
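
A toy model of that recency bias (invented weights, not YouTube's actual system): score candidates by what you watched, with the most recent watch weighted far more heavily, and a single political clip can fill the sidebar.

    # Toy recency-biased recommender: each channel in the history gets a score,
    # with newer watches weighted exponentially more than older ones.

    def recency_scores(history, decay=0.5):
        scores = {}
        weight = 1.0
        for channel in reversed(history):         # newest first
            scores[channel] = scores.get(channel, 0.0) + weight
            weight *= decay                        # older watches count for less
        return scores

    watch_history = ["cooking", "cooking", "math", "political_clip"]  # oldest -> newest
    print(recency_scores(watch_history))
    # {'political_clip': 1.0, 'math': 0.5, 'cooking': 0.375} -- the latest watch dominates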


YouTube is sort of caught between a rock and a hard place here.

So you watch a video about the history of vaccines and how many lives have been saved by vaccination. Should YouTube recommend a counterpoint video about how vaccines are going to kill you and everyone you love?

If you are recommending "other sides of issues", either you start showing a lot of really crazy things to people who wouldn't otherwise see them, or you need to take an editorial position that some issues are "settled" and one side is just "wrong", and do one-way gating: show a breath of sanity in the follow-up recommendations to "bad" videos, but don't show "bad" recommendations on videos that get the subject "right".

Which one would you prefer?
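
For concreteness, a toy sketch of the second option (the one-way gating), with hypothetical labels standing in for the editorial judgment it requires:

    # Hypothetical one-way gating: debunked videos get mainstream follow-ups,
    # but mainstream videos never get debunked ones recommended back.

    SETTLED_WRONG = {"vaccines_will_kill_you"}      # editorially judged "wrong"
    MAINSTREAM    = {"history_of_vaccination"}      # editorially judged "right"

    def may_recommend(just_watched: str, candidate: str) -> bool:
        if just_watched in SETTLED_WRONG and candidate in MAINSTREAM:
            return True    # follow a "bad" video with a breath of sanity
        if just_watched in MAINSTREAM and candidate in SETTLED_WRONG:
            return False   # never recommend the "bad" side back
        return True        # everything else falls through to the normal ranker

    print(may_recommend("vaccines_will_kill_you", "history_of_vaccination"))  # True
    print(may_recommend("history_of_vaccination", "vaccines_will_kill_you"))  # False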


It's funny how the problem is so _us_ this time. Turn a mirror on our souls and we lose nuance, mediums, neutrals, and common ground. "Ben Shapiro makes a good point" becomes "BEN SHAPIRO DESTROYS SUBHUMAN SCUM."

Who is that message for? It's not a discussion or debate, it's linguistic flashing lights to simulate a community around a simulated conflict. YouTube Red, solving alienation in a more profitable way than the last red revolution.

The crime of social media is that they've created public solitary confinement. The internet is now a place where you go to be alone with your thoughts as simulated by companies. So boring.


> either you start showing a lot of really crazy things to people who wouldn't otherwise see it, or you need to start taking an editorial position that some issues are "settled" and one side is just "wrong"

Or you could attempt to take a neutral, quality-driven stance on recommendations: which videos do a good, _honest_ job of explaining the case for a given point of view? Plus of course any videos that present a reasonably balanced look at the pros/cons of multiple sides. You can even still factor in which ones are most engaging, as a secondary element (e.g. preferring Bill Nye over someone droning on boringly about climate change).

Maybe that has a bias toward moderate positions and away from extremes, but that doesn't seem terrible since (a) it's what compromise is built on, and (b) it seems more likely to succeed in winning over more newcomers to that viewpoint anyway (does David Brooks convince more liberals to question some of their beliefs than Alex Jones? I'd bet so).

I think the real problem with this approach is that it's very hard to build an _automated_ recommendation engine for this. At scales that matter (YouTube, Facebook, etc.) you need it to be automated and hard to game. And ML doesn't quite seem up to the task of ranking things by demagoguery, let alone honesty, at this point...
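
As a sketch of what "quality first, engagement second" might look like (the scores here are hypothetical, not any real API):

    # Rank by an estimated honesty/quality score first, with engagement only as
    # a secondary factor -- quality dominates, engagement breaks near-ties
    # (Bill Nye over the dry lecture, both over the demagogue).

    candidates = [
        {"title": "Demagogue rant",         "honesty": 0.1, "engagement": 0.95},
        {"title": "Bill Nye on climate",    "honesty": 0.8, "engagement": 0.80},
        {"title": "Dry lecture on climate", "honesty": 0.9, "engagement": 0.20},
    ]

    def composite(v, quality_weight=0.7):
        return quality_weight * v["honesty"] + (1 - quality_weight) * v["engagement"]

    for v in sorted(candidates, key=composite, reverse=True):
        print(f"{v['title']}: {composite(v):.2f}")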


Well, I'd prefer annotated recommendations with reasons, with analysis of quality as a big feeder into the recommendations. That way, both opposing and supporting viewpoints (and even ones supporting in part and opposing in part), especially the better-supported ones, would be natural recommendations, with their relationship to the thing you just viewed annotated. OTOH, that's probably a lot harder to do than what typical recommendation engines do, especially automatically; intuitively, it seems likely to be essentially an AI-complete problem, though I wouldn't be too surprised if someone found a way to do a "good enough" version for common classes of content without full AI. The annotated-related-items thing is found some places, but in my experience those are always manually curated, narrow-focus databases (e.g., commercial legal authority databases).
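
A sketch of what such an annotated recommendation might look like as data (made-up schema, just to make the idea concrete):

    # Each suggestion records its relationship to the video just watched and a
    # quality estimate, so the annotation can be shown alongside the link.

    from dataclasses import dataclass

    @dataclass
    class AnnotatedRecommendation:
        title: str
        stance: str      # "supports", "opposes", "supports in part / opposes in part"
        quality: float   # 0..1, from quality analysis rather than raw engagement
        reason: str      # the annotation shown to the viewer

    recs = [
        AnnotatedRecommendation("Meta-analysis explainer", "supports", 0.9,
                                "Stronger evidence for the same claim"),
        AnnotatedRecommendation("Methodology critique", "opposes in part", 0.8,
                                "Disputes one of the video's key sources"),
    ]

    for r in sorted(recs, key=lambda r: r.quality, reverse=True):
        print(f"{r.title} [{r.stance}]: {r.reason}")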


Just hit this on YouTube today, had to go down past hundreds of nearly-identical garbage recommendations to get to one of interest. At least they should toss in a tiny bit of RNG.
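
Even something as crude as this (a toy sketch, not how YouTube actually works) would break up the wall of near-duplicates:

    # With small probability, swap a near-identical top result for something
    # from the long tail.

    import random

    def diversify(ranked, long_tail, epsilon=0.2, rng=random.Random(42)):
        out = []
        pool = list(long_tail)
        for item in ranked:
            if pool and rng.random() < epsilon:
                out.append(pool.pop(rng.randrange(len(pool))))
            else:
                out.append(item)
        return out

    ranked = [f"near_identical_clip_{i}" for i in range(5)]
    long_tail = ["woodworking_tutorial", "jazz_live_set", "lecture_on_rome"]
    print(diversify(ranked, long_tail))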


I mean, that style of title exists across the political spectrum on YouTube and it's beyond stupid. Whenever I see "X DESTROYS Y" titles I assume the video will be a waste of 3-10 minutes with no particular moment of linguistic destruction on either side, and will in fact just be two people bickering.

Just clickbait titles is all it is.

Regardless, I don't see how any of that can really count as "extremist" or "conspiratorial" content.


The crux seems to be that Google's algorithms are designed to maximize time spent watching. It's frankly absurd how successful they have been at this, especially in younger generations.

It's one thing when their team of PhDs channels attention into cat videos, and another to channel it into potentially unhealthy amounts of extremist material and conspiracies.

I don't know what the answer is, but I do miss the days before infinite scroll and autoplay.


One of my friends is the music director of a US freeform radio station. He’s a legend in the industry, known for his encyclopedic knowledge of cool but highly obscure artists over the past 40 years. Over the past year, he has often been left with his mouth open when YouTube’s algorithm starts picking what’s next after we pick a starting point. “How the hell does it know about these guys?” That’s how good the robots have become.



