
> Are you claiming that online advertising doesn't work

I'm not the GP, but I think the idea is that advertising isn't coercive. When I see an ad for a Swiffer WetJet, I'm not immediately compelled to go out and buy it against my will. I only buy it if it actually meets a need that I have. Ad-tech tries to guess who might be most likely to have that kind of need and targets advertising accordingly (e.g., it would probably be dumb to show a Peloton ad to a handicapped person).

> gathering large amounts of data about individual users to tailor ads to them doesn't increase the effectiveness of those ads?

That's just a lot of words for "being able to find people that might agree with some ideas and showing them media that expresses those ideas". In a world with free expression, it's hard to see what the problem with that is.

From where I sit, there appears to be opposition to the fact that there exist people who have an appetite for certain "bad" ideas, and the only way to blunt those people's democratic power is to curb their ability to be exposed to those viewpoints.




A problem arises when the appetite for 'bad ideas' isn't merely for trivial things like whether Rust is better than C++ but rather for ideas about which of one's neighbors should be discriminated against, persecuted, imprisoned, or killed.

Providing a platform where people who subscribe to these kinds of ideas can self-organize and connect to demagogues who would lead them is not a virtuous defense of freedom of expression; it is a real threat to the liberties and lives of the people these ideas target.

Submerging people in a sealed bubble of misinformation that only reinforces their prejudices--as Facebook does through a combination of advertising, recommendations, and control over user timelines--does lead to radicalization, and that radicalization can be weaponized in ways that are fatal to others.

Just as the right to swing your fists ends at someone else's nose, the right of 'bad ideas' to be amplified by platforms ought to end before people who believe in those 'bad ideas' are turned into political weapons by demagogues and would-be autocrats.


> A problem arises when the appetite for 'bad ideas' isn't merely for trivial things like whether Rust is better than C++ but rather for ideas about which of one's neighbors should be discriminated against, persecuted, imprisoned, or killed.

Sure, but I don't think it's useful to argue about the extremes since a lot of contemporary contention applies toward arguably non-violent ideas. For example, is being in favor of Brexit beyond the pale? How about campaigning for it? How about anti-immigration messaging? How about anti-gun control or anti-abortion? The reality is that the vast majority of the divisive targeted ads on FB are around those hot button issues. Almost nobody sees Holocaust-denial or outright genocide advocacy.

> Submerging people in a sealed bubble of misinformation that only reinforces their prejudices--as Facebook does through a combination of advertising, recommendations, and control over user timelines--does lead to radicalization, and that radicalization can be weaponized in ways that are fatal to others.

That's not a Facebook problem; that's a human problem. You either let people consume the ideas they want in a society where information flows freely, or you force people to "eat their vegetables" (so to speak), or you prevent people from reading/seeing things that we think are "too dangerous".


> the right of 'bad ideas' to be amplified by platforms ought to end before people who believe in those 'bad ideas' are turned into political weapons

Should a politician be allowed to Tweet that too many people from a particular minority group are moving into the country? (Perhaps the politician believes that these migrants are increasing crime or unemployment).

If someone reads that Tweet and then goes and violently attacks people from that minority, have they been turned into a political weapon by a demagogue?

With fists and noses, there are well-defined lines, but I'm not sure what speech or algorithms you are suggesting should be outlawed.


Where to draw the line is a very hard question for which I don't have a definitive answer.

Legally, there's precedent that RTLM[0] (Rwandan hate radio) went too far, as the people who ran it were convicted of crimes against humanity. How much lower to set the bar to protect the public while still preserving individual freedom of expression is a question I feel society isn't yet ready to answer.

I personally would set the bar at spreading false information. Most of the more vile efforts by demagogues to stir up violence are driven by fabricated propaganda. It's difficult to use facts to incite violence because reality is usually too mundane for people to get violently upset about. Non-factual propaganda, however, can be as lurid as it takes to make people get violent. Keeping outright fabrications out of public view ought to help keep some of these risks contained.

It's distasteful, for example, to spread valid statistics that connect immigrant populations with crime (in large part because such statistics, given without context, ignore integration and economic equity issues), but I feel spreading facts is less likely to trigger persecution/violence than feeding vulnerable people with a constant information diet of lies along the lines of 'immigrants eat babies.' As a result, my take on where to draw the line on political speech is 'say what you want as long as it's true.'

[0] https://en.wikipedia.org/wiki/Radio_T%C3%A9l%C3%A9vision_Lib...


> I'm not the GP

Thank you for volunteering to clarify. I've upvoted you, but I disagree with your points, I'm afraid.

> I think the idea is that advertising isn't coercive.

I'm not sure what definition of "coercive" you want to use, but I think it is absolutely possible for an advert to lead to someone acting against their best interest. (The term "best interest" is also quite loaded, but we can at least acknowledge cases where someone's actions are influenced by an advert and they then go on to regret those actions).

Perhaps a more concrete example would be cigarette advertising, which many societies have deemed somewhat "coercive" (albeit not as "coercive" as the nicotine itself). If these ads didn't encourage people to smoke, and merely assisted existing smokers in choosing the brand which best suited their needs, then I doubt they would need to be banned.

> "being able to find people that might agree with some ideas and showing them media that expresses those ideas"

While that sounds reasonable, I believe it misses some subtleties particularly in the case of elections.

Firstly, people should have a right to know which entity is authoring these ideas, so that they can be held accountable and any past reputation and potential biases can be taken into account. Just as importantly, the public should know which ideas are being selectively broadcast by each campaign, as it is more than possible that a campaign presents mutually contradictory arguments to non-overlapping groups of people, which again should factor into the credibility of the campaign.

If those concerns are taken into account (and the general concerns of people having their personal data used to target psychometrically-engineered adverts at them without their consent) then I have no objection to "bad" ideas being widely distributed.


> Firstly, people should have a right to know which entity is authoring these ideas, so that they can be held accountable and any past reputation and potential biases taken into account. Just as importantly, the public should know which ideas are being selectively broadcast by each campaign, as it is more than possible that a campaign presents mutually contradictory arguments to non-overlapping groups of people, which again should factor into the credibility of the campaign.

I agree with all of this, and it all exists in Facebook today. Political ads on Facebook prominently display who paid for the ad, and there's a centralized ad registry containing all active ads by campaign.


Yes that is one of the reasons I was alluding to.

The other was that it enabled the political news media to restore their damaged perceived credibility by blaming foreign meddling as an excuse for their poor, echo-chambered reporting of where their respective countries actually stood.


In the case of Brexit, the political news media was correctly reporting that the British public preferred Remain until a sudden swing in the opinions of undecided voters in the few weeks before the referendum.[0]

In the case of the US, the political news media was correctly reporting that the American public preferred Clinton to Trump.

Neither case should have damaged their perceived credibility, except in the minds of people who believed that there was a conspiracy of under-reporting the popularity of Brexit and/or Trump.

[0] https://en.wikipedia.org/wiki/File:UK_EU_referendum_polling....


In the case of Brexit, the sudden swing to 'leave' happened very shortly after leave ran a massive campaign of deceptive dark ads on Facebook. This advertising campaign pushed factually untrue narratives (such as that the EU was on the verge of forcing the UK to be overrun by Syrian migrants)[1] to vulnerable people and, from all indications, was able to turn the referendum in leave's favor.

The sudden swing towards leave does not seem to represent a failure in polling (or media) but rather a demonstration that Facebook's dark-ads platform alone can subvert the ability of a high-functioning democracy to make an informed, fact-based decision on a question that will impact generations of people.

That should terrify a lot more people than it does.

[1] https://www.joe.co.uk/news/brexit-facebook-adverts-192164 ; https://www.channel4.com/news/factcheck/factcheck-vote-leave...





