The problem is not bots; it's that money can flow towards troll farms consisting of real individuals, posting of their own free will, but massively amplified. And it's almost always cheaper to sow discord than to scale a "fact checking farm." I'm not sure if democracy has actually proven itself to be stable; perhaps it's an unstable system whose oscillations were artificially dampened until 2016, because this strategy and the requisite technology had not yet matured.
The problem is information DDOS. We, as individuals, are being overwhelmed with enormous amounts of information of vastly varying quality and intention, and we don't have the tools to process it.
This leaves us very vulnerable to manipulation by anyone who has even a slight edge in processing and disseminating large quantities of information.
I heard a very interesting comment on a podcast recently. This problem has traditionally been solved by intermediate organizations like universities, news orgs, political groups, unions, trade guilds, religious groups, etc. People selectively choose which of these orgs they trust, and then receive heavily curated information from those groups.
In the digital age, the influence and reach of these intermediate groups has been decimated / overshadowed in many ways. The proposed solution on the podcast was that we need to foster new ways of empowering individuals to connect with next-gen intermediate organizations in order to mitigate the Information DDOS. For example, Facebook could get out of the content filtering business entirely, and instead provide a completely agnostic platform that intermediate orgs could live on top of as a curation layer. People then choose which of those groups they receive content from. I think the idea needs more work, but I found it very interesting and relevant.
Of course, the intermediate orgs have not always done a perfect job, and have their own forms of manipulation and corruption. But throwing them out entirely without a proper replacement seems to be a huge mistake -- especially as the amount of information expands very rapidly, as is happening now.
>In the digital age, the influence and reach of these important intermediate groups has been decimated
I'm not sure that's a bad thing. Those intermediate organizations have been biased in the past and seem to have become increasingly biased to the point of becoming useless for those that want to focus on facts rather than opinion & commentary.
We know that widely available information is generally good for democracy. This could actually be a step in the right direction, but there is some adaptation that society has to undergo in order to efficiently filter the new influx of information.
Whereas before there was inherent trust of published info, I think a general skepticism over new information is likely to develop.
I hear this reasoning regularly, and I'm still not convinced of it. I don't necessarily think the old way was "good", but the "least bad" workable solution. Centralised media and so on do have inherent biases, but they are also held accountable when they publish incorrect information.
The problem we have today is not that information is widely available, but that disinformation is widely available and all but impossible to distinguish from real information. We're seeing time and time again, in elections across the world, that false propaganda spread virally through social media is having a meaningful effect on the way people vote. I, too, would love to imagine that a "general skepticism over new information" will develop, but we're not seeing any signs of that. If anything it's the opposite.
Those 'intermediate' media companies can be purchased. Who exactly is holding them accountable, and to what? I'd argue that both Fox News and MSNBC are biased to the point of being useless for informational purposes and objective thought.
I also wonder how much of this 'fake news' people actually believe vs. using it as justification for their existing beliefs & non-rational biases.
Globalization, LGBT rights, open trade, open borders, etc. have happened relatively quickly and people resist change.
> Those 'intermediate' media companies can be purchased.
They can, yes. But we know who purchases them, and can make educated guesses towards the intent behind buying such a thing. For instance, the mass purchase by Sinclair of local TV stations in the US. We couldn't stop that from happening, but it meant that when there was a sharp shift in the news coverage those channels provided, we knew why.
> I'd argue that both Fox News & MSNBC are biased to the point of being useless for informational purposes and objective thought.
I'm not going to argue with that- like I said, the current system isn't perfect. But the likes of the NYT, BBC, WaPo, WSJ etc. do issue corrections on stories, and are held accountable, broadly, by each other and the public at large, because what they publish is public and known. By contrast, the new world we find ourselves in where posts can be advertised to specific audiences isn't public - not really - so there can't be any accountability at all.
> Those intermediate organizations have been biased in the past and seem to have become increasingly biased to the point of becoming useless for those that want to focus on facts rather than opinion & commentary.
I'm not convinced that is true, but even if it were the important question isn't "are they biased?" but "are we replacing it with something worse?". So far, the latter seems plausible.
Developing a general skepticism isn't a bad thing in and of itself, so long as it is tempered by the real goal which is to have a reliable mechanism for evaluating and accepting new information.
> We know that widely available information is generally good for democracy.
This is only true if the information is correct, and quite the opposite if we are talking propaganda etc.
Thank you, absolutely agree. Every time I hear someone say that supposed "liberal" institutions are "biased," I genuinely struggle to come up with an answer to the question "biased towards what?" other than "objective reality."
How about this idea of "safe spaces"? And also the idea that it is generally okay to attempt to shut down, yell over, or ban speech or ideas that they don't agree with.
That is not what propaganda means. By its very nature it [tends to be] misleading.
By all means clear communication of facts is beneficial for democracy. But that is not what propaganda is.
[edit: dragonwriter pointed out it is not strictly necessary for it to be misleading - this is true and not what I meant to suggest. In practice it nearly always is misleading. Its purpose is to manipulate, which is always detrimental to democracy imo]
> That is not what propaganda means. By its very nature it is misleading.
No, propaganda is designed to influence belief and behavior in a particular way (the defining example was material designed to lead people to the Catholic faith; “propaganda” comes from the former [pre-1967] Latin name of the modern Catholic Congregation for the Evangelization of Peoples [0]); it is not necessarily intended to be misleading.
I think the biggest issue is just that: we've seen a huge increase in people's and groups' ability to have their voices heard, without a commensurate increase in the ability and willingness to evaluate those voices and filter out bad information. Without mechanisms at the personal level to filter out the voices just yelling nonsense for their own gain, increased information isn't good; it's actively detrimental. People don't have an infinite capacity to consume information and evaluate it against everything else, so we wind up in a bunch of smaller communities where individuals are getting a full helping of information, but it can be completely skewed and distorted. At least in the old, more monolithic days there was a more common set of facts people were working from, whereas today it seems more and more that I'm arguing the basic facts about the state of $ISSUE rather than the merits of one approach or another.
But we already have "next-gen intermediate organizations" that filter the news for us. I carefully curate people I follow on Twitter, subreddits I'm subscribed to on reddit, podcasts in my feed, etc.
This is a good point, we do have some. But I think there is vast room for improvement. For one, the examples you listed require significantly more effort and care to manage.
Perhaps the best platforms of the coming decades will succeed by providing very powerful tools that enable intermediate orgs to more effectively scale up alongside the massive flow of information while remaining true to their core values.
Reddit is probably one of the best examples right now. But I think it is only a glimmer of the potential.
I also think that we have to somehow remove the incentives that push for blindly increasing "engagement" at all costs. The obvious targets being digital advertising and the grotesque, anti-human ad-tech industry that it has spawned.
Because a single person can't be an expert at everything, and at some point, individuals must delegate trust to a third party.
I'm not an expert on climate change, but it's an issue that I may care about. So what do I do? Drop everything and spend the next 10 years becoming an expert?
I think for myself plenty. I have realized that it is foolish to completely abandon accumulated collections of knowledge and wisdom and try to do everything from scratch by myself. Yes, they have flaws, but everything has flaws. Any alternative framework or approach must do significantly better, with fewer flaws.
Of course, constant effort is required to reevaluate things, to account for known flaws, and so on. I would never advocate for docile, naive acceptance of authority.
Edit: I should have also mentioned that I'm a strong proponent of applying a reasonable degree of skepticism to all incoming information, as well as being comfortable with uncertainty. Saying "I'm not sure about that but I'd like to learn more" is a superpower.
Why don't you build your own house, build your own car, filter your own water, generate your own power, build your own transit system, enforce your own food regulations, enforce the boundaries of your own private property, create, distribute, manage your own currency? Do it yourself ffs, it isn't that hard.
The organization name is ominous but I learned about "truth decay" last night. According to a report from the RAND Corporation, what we're seeing today in terms of "fake news" is not new at all, and has been seen multiple times in the last century usually after a disruption in the communication industry like newspapers, radio, and television. I don't know about fake/manufactured consensus through multiple accounts but Chomsky (Manufacturing Consent: The Political Economy of the Mass Media) would probably argue that that's not that new either- maybe just with a different face or in this case, faces.
According to On Tyranny: Twenty Lessons from the Twentieth Century by Timothy Snyder, democracies fail all the time easily. You go to sleep in a democracy, you wake up the next morning in a fascist state. You lose rights, you lose the ability to vote in elections, and then within a month you're in a "state of emergency" aka a fascist state for the next 12 years (Germany) or still to this day (Russia). Snyder's book shows that tyranny or fascism is a deliberate systemic process.
Money has been spreading misinformation long before troll farms. Fox News doesn’t care about fact checking. We had birther conspiracies, Obama is a Muslim sympathizer, immigrants are coming to kill you, and other crazy narratives peddled by the most watched cable news channel for over a decade.
The difference is that until recently, you saw this stuff on Fox, and other stuff elsewhere. Sure, some people (a lot of people!) only watched Fox and swallowed it all, but a lot of people didn't. They knew that Fox was Fox, etc. Nowadays, in a pile of YouTube comments, or Twitter replies, or Facebook posts, you cannot tell who or what is behind it. You know when a story is broadcast on Fox that it's Fox. When it's a comment, is it a real person? A bot? A troll-farm worker? If one of the latter, where is the money coming from? There's no way of knowing. Add to this group polarization[1] and other known psychological factors, and it's easy to flood out a particular online space with a particular narrative, with no visible accountability.
> Money has been spreading misinformation long before troll farms. Fox News doesn’t care about fact checking.
It goes back far before that. Even into this nation's early history; and all the way back, at least, to the Protestant Reformation.
Check out the Library of Congress image collection. You wouldn't believe some of the fliers (the "memes" of their time) published about Abraham Lincoln and George Washington. They make today's trolls and alt-right look like nuns.
I'm conflicted here. YouTube is an awful place to get political commentary, so I'd say that anyone whose opinion is primarily decided on YouTube has made a mistake to begin with. On the other hand, a LOT of people seemingly do get their political commentary from YouTube, and we might want to mitigate the damage that that causes. On the third hand, YouTube is not a public forum and doesn't have to do anything for us.
I've noticed what I believe to be fake YouTube comments around politics and financial topics especially. For example, sometimes I'll watch a video related to the US stock market or currencies, and in the comments there will be a heavy bias against US corporations or the USD. I remember one video about stocks dropping that had maybe ~60 comments in total, ~50 of which were heavily biased towards China and talked about how China is great and the USD will fall. I understand people out there have that opinion, but seeing 50 people all saying basically the same thing seemed like a well-orchestrated plot.
I've noticed many instances of this, and usually I like looking at comments to find different views or opinions, but sometimes it's a bit too unanimous. I have a suspicion that many of the top comments are fake, made to look like a bunch of random people commenting on the video when it's really a group of people trying to change how people think. Which sounds silly, but people follow the masses: if the majority of YouTube comments are saying that x thing is bad/good, then people just see that and start believing it as fact, since everyone else seems to believe it. I mean, the entire comment section is saying that x person is bad, plus there's 100 upvotes on the first comment going into why that is, sooo it must be true :P
If you read a lot of comments on the same subject, they all start to look the same after a while. I don't know how you could tell the difference between a bot and a bunch of people who share the same ideology.
Either way, it's not a random sample, which is what you'd really need to gauge average opinion.
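One crude, illustrative signal (by no means proof of a bot campaign, since real people who share an ideology also repeat each other) is a batch of comments that are nearly word-for-word identical. A minimal sketch, with arbitrary names and an arbitrary 0.5 similarity threshold:

```python
import re

# Near-duplicate detection via word-shingle Jaccard similarity.
# All names, the sample comments, and the 0.5 threshold are illustrative.
def shingles(text, k=3):
    # Lowercase, strip punctuation, and take overlapping k-word windows.
    words = re.findall(r"[a-z0-9$]+", text.lower())
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    # Fraction of shared shingles; 1.0 means identical wording.
    return len(a & b) / len(a | b) if a and b else 0.0

comments = [
    "China is great and the USD will fall",
    "china is great, and the USD will fall!",   # near-duplicate of the first
    "Interesting breakdown of the earnings report",
]

sets = [shingles(c) for c in comments]
suspicious = [(i, j) for i in range(len(sets)) for j in range(i + 1, len(sets))
              if jaccard(sets[i], sets[j]) > 0.5]
print(suspicious)  # the first two comments pair up; the third does not
```

At scale you'd need something cheaper than all-pairs comparison (e.g. MinHash), and even then a cluster of similar comments only tells you the wording repeats, not who typed it.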
Sorry, seriously though, have you noticed this with HN comments? I sometimes get that feeling when browsing here, I really notice it on the comments that immediately flood in when Apple are having a conference....
To be clear, I wouldn't say it's a fully programmed bot. It's probably a real human writing the "fake" comments. I wouldn't be shocked if a couple of governments had tens of thousands of YouTube accounts that they created many years ago and then let age and try to make it act like a real account for a couple of years. Of course, over time they'd start using it to write comments that are related to their agenda. Say, every 30th comment or so could be related to pushing their agenda and as long as each comment is different it'd be hard to ban such an account.
This type of operation sounds silly until you realize you could push your agenda to tens of millions of unique people reading the comments section on YouTube alone. If this idea sounds crazy, just remember the recent US elections were found to have a similar scenario occur with fake websites being pushed on various social media platforms and eventually people believed and trusted the articles written on the sites and started reposting, liking, etc. the fake news.
"have you noticed this with HN comments?"
As for HN comments, I don't spend a lot of time reading through comments here to know. I'd assume it happens on every social media platform. If this is occurring on YouTube, Facebook, and Twitter, then every platform is a target. It's all about the eyeballs, so I'd be shocked if there weren't already thousands of fake accounts either aging or already in use on HN. If I were running this type of operation, I'd have thousands of accounts on each platform: Pinterest, Twitch, Quora, TikTok, Snapchat, you name it, I'd have it. If it has millions of users per month and is a social media platform where discussions and opinions occur (or might occur in the future, via new features on places like Snapchat), you better believe I'd have accounts on it.
And Hacker News comments are any better? The core issue becomes: who do you trust? If you read a comment and blithely accept what it says without thinking about it, you'll fall for any scam that comes along. You need to examine the ideas within the comment, compare them with other information (from hopefully different sources), and then come to a conclusion as to their veracity. It's part of being in a society where freedom of speech is a right (a US-centric viewpoint, since that's where I'm from).
I never said that anywhere was any better. I'd like to think Hacker News is a place where fewer people would be fooled, because this is a place specifically for critical thought and discussion.
YouTube is not. It's a website that everyone uses, and where the most impressionable or least politically aware members of the population are the most numerous, easiest targets.
And most importantly: YouTube lacks a dang. The volume of comments that are processed vastly exceeds what human moderation could possibly do, and community up-or-down-flag moderation of comments has massive known vulnerabilities to sock-puppeteering and signal-jamming. It's an extremely ripe target for this type of manipulation.
If YouTube cared about anything aside from "user engagement", they could quite easily whip their comments into shape. They're not really trying.
> The volume of comments that are processed vastly exceeds what human moderation could possibly do
For every 500 comments, there is one video. For every 50 videos, there is one Channel with a Google user behind it. Force creators to handle the complaints about comments on their videos... an order of magnitude less work.
I've gone into this before and there really seems to be total buy-in here to the idea that these companies couldn't possibly manage to weed the toxicity off their platforms. I don't believe it for an instant. It's true only if you refuse to entertain any decrease of "user engagement". If you are willing to lose some eyeballs, the problem is manageable.
And that sacrifice, in the case of YouTube, isn't even a sacrifice! YouTube would wind up with more users if the comments weren't so toxic. Susan et al just don't have the courage to change anything.
The purpose of trolling and fake comments is not always to directly shift anyone's opinion. It can also be about spreading doubt and confusion by constant repetition of falsehoods, until certain topics and positions that previously were completely unacceptable become an ordinary part of discourse and a viable "view". It's about manipulating and eroding social control mechanisms.
For example, that's how it can become possible for a politician to get elected who openly advocates torture or extrajudicial killings.
The most significant difference is that comments on HN tend be intelligent, well-reasoned and courteous, whereas comments on YouTube tend to be the psychotic rantings of paid trolls, foul-mouthed children, and hate groups.
Like you say: I think it's a waste of time to condemn entire platforms - there are really interesting people in these places who can always be discovered, here and on YouTube. Just be aware that you might occasionally find yourself in a besieged zone.
People make claims about shills, bots, astroturfers, and so on, all the time, also on HN, and experience has taught me that nearly all of this is projection: seeing what you want to see, or fear to see, or some other pre-existing perception. Overwhelmingly, people are seeing patterns that they themselves are reading into the material. I really do mean overwhelmingly, at least in our little corner of the internet. I don't know whether it's different for YouTube comments but don't see why it wouldn't be.
Without some corroborating evidence, these claims don't hold any weight. It's not that they're necessarily wrong, it's that there's no way to tell. Imagination produces just the same perceptions, and there is a ton of imagination going around.
Occasionally we do find evidence, and then we can do something about it. Absent that, though, the default explanation has to be that you can simply see anything in anything, especially when emotions are involved. Note the "I fear" in the title of this submission.
What's frustrating is that users don't have access to most of the data, making it harder to look for evidence themselves. They have to rely on someone else to do it and then trust what they say. In a low-trust social climate, that's working less and less, which only fuels more imagination.
It's a pretty easy question to answer: Yes, they do. Look at the engagement in terms of likes and replies on almost any video. Clearly people are reading and participating.
That's exactly what I was thinking. I'll sometimes skim comments during a boring bit in a video (such as an old-school NES/SNES speedrun), but I don't attempt nor expect serious discussion or rebuttals. I imagine the noise-to-signal ratio is even worse for videos with any type of political content.
I sometimes read them, especially on my own channel. Surprisingly, I haven't really encountered too much in the way of political idiocy there, perhaps because I don't tend to make videos about politics/anything that could get an ideological reaction.
So if a channel is fairly obscure and doesn't focus on politics, the comments can be... bearable.
But anything remotely popular is a hive of scum and idiocy that most people will rightfully not touch with a fifty foot bargepole. Less political there just means more 12 year olds posting outdated memes in place of the Russian bot farms and political trolls.
I only look at the first 5 comments, in case the YouTuber responds to them or likes them directly. Everything else is usually just repetitive, low-quality spam.
Comments are not available on TV (Smart TV, Fire Stick, PS4...), which is used more and more. So I guess, fortunately, they are not that important to Google.
Wow. I thought this was going to be a discussion of how Google/Youtube was manipulating the comments. But it was merely about some silly bots fooling a bunch of kids. Youtube is never going to do anything about that. They prefer the "Community Guideline Strike" Hammer or take down your channel completely. They can disable comments completely (if they want to) or change them here and there. I've seen complaints about this but it's hard to prove. But we've all probably seen videos with COMMENTS HAVE BEEN DISABLED (and not by the owner). They WANT you to be manipulated by the comments on channels they allow. This guy is completely missing the boat here...
Author here--it sounds like you're talking about YouTube banning comments on certain videos? If you're talking about YouTube changing the content or social proof on individual comments, that sounds like a conspiracy theory to me.
I dislike how people use "conspiracy theory" so dismissively these days. They were likely referring to what was shown in this video https://www.youtube.com/watch?v=ptiWBrd9YbQ.
I believe the real issue is Google itself manipulating the comments for their own ends -- to make money and market to you. Bots fooling kids is not something they care about.
It's not just comments - the ability to drown out other views applies to actual news articles too. We saw back in 2009 with the Time Person of the Year campaign how easy it was for coordinated groups to drown out the public (https://techcrunch.com/2009/04/21/4chan-takes-over-the-time-...). As legitimate news organizations increasingly use analytics as part of their decision processes, that unsolved vulnerability presents an opportunity for click farms to drive traffic to articles and indirectly manipulate the production and placement of actual articles as well =/
To me, this feels like a precursor to the actual issue that's going to happen.
Eventually, we won't be able to distinguish between real humans and bots online. No CAPTCHA will work, no Turing test, none of that stuff.
Message authentication (e.g. keysigning, or a centralised provider you trust) works for individual identities that you're already aware of, but the general case of trusting comments online?
Even without all of that, you've never had any idea whether the "person" replying to your comment is paid, or has a bias, or whatever else.
If, say, 30% of Hacker News commenters were "fake" in that future scenario, would we know? How would we know? Once we develop a method, can't that be gamed?
As for HN, do you think it’s less than 30% today? If so (or not), how do you know?
What do you count as “fake”? Actual bots, sure, but what about a real person running an automation that spams overnight? What about many real people mass-typing “opinions” on behalf of someone who pays them? What about a single businessman typing a falsehood that helps his business? Which of these count as fake?
It's one of the biggest problems that I worry about with the internet. Yes, public key cryptography can prove that "message x came from source y". But nobody's figured out a way to prove "source y is a person" or even better "source y is definitely the person they say they are".
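To make the gap concrete, here is a minimal sketch of the "message x came from source y" guarantee, using a shared secret with a trusted provider (public-key signatures give the same property without sharing the key). The secret and message are hypothetical:

```python
import hmac
import hashlib

# Hypothetical per-account secret shared with a provider you trust.
SECRET = b"per-account-secret"

def sign(message: bytes) -> str:
    # Tag that only a holder of SECRET could have produced.
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # Constant-time comparison to avoid timing leaks.
    return hmac.compare_digest(sign(message), tag)

msg = b"this comment really came from account y"
tag = sign(msg)
print(verify(msg, tag))          # True: origin and integrity check out
print(verify(b"tampered", tag))  # False: any alteration breaks the tag
# Note what is NOT proven: that the key holder is a human, is unpaid,
# or is who they claim to be.
```

The cryptography is the easy part; binding the key to a real, unique, honest person is the part nobody has solved.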
I like the approach projects like Keybase and Stellar are taking: Building a decentralized network of trust based on existing, real-world trust-based relationships.
Absolutely nothing, unless diverting attention counts. A few product announcements, something something net neutrality, a little more "juice" for every possible news item with an anti-Facebook or anti-Twitter vibe, and voila! Nobody's talking about YouTube any more. They're in the attention business, after all. They know that letting attention dissipate is their best strategy in this case.
In a better world, they'd be using some of those vast resources and ML expertise to detect the kinds of vote/comment clustering that signal both commercial and political manipulation. Others have done it already. The fact that the self-styled "best and the brightest" are way behind on this says that they don't have the interest.
Yes. Facebook has been doing this for spam for ages, for Russian and Chinese and Iranian political influence rings more recently. It's sure not easy at that scale, but it's not beyond Google's capabilities.
Disclaimer: I work at Facebook, though I'm only involved in those efforts to the extent that I work on the data-storage systems that support the data scientists.
The problem with online discourse in general is less the existence of the troll farms and more an abundance of people who voluntarily produce and endorse stuff in a manner essentially indistinguishable from the troll farms.
If you want people to endorse the veracity of your scam, you'll have to pay. Want them to endorse a claim you've just made up about $politicalentity, and you won't...
I thought about it deeply. I think in a utopian future we should give a monopoly on politics to a few public news outlets, which would have to be as transparent as Ethereum is. Plus a few strictly enforced rules, e.g. an article is either an opinion, boldly labeled as such at the start and the end, or a fact sheet.
That will have the effect of said news outlets being extremely scrutinized.
At the same time, any platform such as Twitter, IG & FB should be made illegal worldwide. The only content based platforms that are fine would be heavily scrutinized educational platforms where no politics are allowed.
I know it'll likely never happen. But what about 1000 years from now? Maybe so.
There's no other way around it. Information is power, and those systems are ripping apart humanity.
The printing press and propaganda were/are also obviously bad. BUT. If they were made hyper transparent, hyper centralized, and obliged to follow a few rules such as clear labeling, it'd be way better imho.
Said news outlet should be just one. Regulated by some new UN entity.
"YouTube the most-used social media site in the world"
By a certain measure that may be true, but if you were to actually measure how often people read and post comments, and what percentage of users do, I think you might not reach that conclusion.
There is a social media aspect to YouTube, but I don't think that is its main function for the vast majority of users (who are probably better described as "viewers").
(that said I think the article is good, and YouTube comment quality should be an embarrassment to Google)
If an attractive Russian seduces a politician for the purpose of espionage and influencing policy, their spouse is not going to excuse them because "Oh you cheated on me because of Russian meddling!"
Like the marriage in the aforementioned hypothetical, left of center politics is failing worldwide for other reasons, not because of some nefarious social media plot. People are growing fed up with government as an entity, and as a result, they want less government in their lives. Over the past few decades, the practiced manner of a politician became something that was mimicked and parodied worldwide, and popular culture widely regarded the politician as a crook for far too long. Even some of the most charismatic leaders who were beloved worldwide - most recently President Barack Obama - have been beset by scandals like Snowden's NSA revelations, and the weak and dishonest manner in which the President responded when the activities of the intelligence agencies were exposed. Even during his first term, he made the (perhaps superior) decision to bail out the Wall Street banks, and was subsequently betrayed by the various corporations who gave large payouts to top executives. To working class folks, these things are unforgivable, especially when they come at the hands of someone you admired, even loved.
The trust in government as an idea is so low that all someone has to do is stand up and say "these crooks and rascals have lied to you all your life. They have practiced and poll-tested speeches but I am honest, raw and I speak from the heart. The fake and dishonest media there will tell you that I am a thug and I have said this and that. I am a thug, but I am your thug, give me a chance to show these crooks how it's done". The public will fall in love, because after what so many parts of the world have witnessed from their governments for so long, this kind of talk is like sweet seduction.
Posts like this ("I read YouTube comments written by bots and the world is going to end because my party lost the last election and they may lose the next one") are disappointing. People can vote for political candidates based on any whimsical or nonsensical reason, be it race, religion, "he has a kind face" - whatever they want to. Voting is an individual right. Any attempt to influence a voter - whether it's a bot farm or a person writing an article to stop a bot farm - means only one thing: "I want this person to vote the way I vote". Democracy needs a free market of ideas. No number of botnets are going to convince me to vote for someone who espouses nazism, or worse, communism. If someone else is swayed, it is their right to be, and that's okay. Great ideas will win in a free market, bots or no bots. All ideas, no matter how objectionable, should be heard.
"Great ideas will win in a free market, bots or no bots."
This is a slogan, not an empirical fact. There are plenty of instances of market failure in cases where information about goods is unreliable or manipulated. It's a whole sub-discipline in economics!
In the case of politics, actors have a strong incentive to cause the information market (if you want to put it that way) to fail.
It’s a little odd to me that people are so focused on supposed Internet comment manipulation when we’ve had one of the largest media empires pushing hate 24/7 through mainstream channels like Fox News for at least a decade.
It is normalization bias. Teenagers dancing to <new music> is dangerous! Organized crime, oligarchs, epidemics, pollution, car accidents? A shrug and "Eh what can you do about them."
I suspect a lot of cognitive dissonance is evolutionary survivorship bias of "don't question the big man with the club/sword", and also why public speaking is commonly feared more than death - because in barbaric days it could kill more than just you.
The existence of Fox News does not negate the value of identifying other, possibly more subversive methods of manipulation. If anything, if we were satisfied with calling out only Fox News, then it has done its job.
I suspect that people that believe wholeheartedly that Fox News is some evil bastion of hate-mongering demagoguery don't actually watch very much of it. It's considerably less hysterical than the national nightly news I see on the old big three.
If you believe a significant chunk of the voting population is swayed by YouTube comments in their political beliefs, you could just as well be arguing for forcibly sterilizing the feebleminded.
There are other potential problems than swaying voters, though: swaying non-voters (to keep them uninvolved) and swaying potential future voters (such as teenagers).
To the extent that people engage in any format, propaganda in that format is concerning. And like it or not, people do engage in YouTube comments.
If you think that being smart and vigilant is enough to protect yourself from this type of manipulation, I think you're wrong. I find it very easy to fall for social proof in internet comments, and I've been in the tech industry for over 10 years. This has nothing to do with intelligence.
It's not just YouTube. Twitter and the rest are vulnerable to the same problem. The whole idea of humans socializing on the internet is vulnerable to this.