All Facebook apps at the time could request the same permissions as the CA apps - I know this because I built many of them for ad campaigns, such as contests and games.
A lot of game/contest mechanics relied on things like posting on friends' timelines, or on your own timeline, etc.
In my case there was no real interest from our customers in taking the data and doing something with it, but if the intent was there, it could have very easily been done.
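To make that concrete, here's a rough sketch of what a pre-2014 Graph API (v1.0) request looked like. The token and field list are placeholders of my own, but the point stands: one user's token could walk their entire friend list.

```python
# Hypothetical sketch of a Graph API v1.0 call (pre-2014); the token and
# field list are illustrative placeholders, not any real app's values.
import requests

GRAPH = "https://graph.facebook.com/v1.0"
USER_TOKEN = "EAAB..."  # token handed over after one user clicked "accept"

# Under v1.0, /me/friends returned the user's ENTIRE friend list (not just
# friends who also installed the app), and the friends_* permissions let
# the app pull those friends' data too.
resp = requests.get(
    f"{GRAPH}/me/friends",
    params={"fields": "id,name,likes.limit(10)", "access_token": USER_TOKEN},
)
for friend in resp.json().get("data", []):
    print(friend["id"], friend["name"])
```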
Furthermore, when my firm did security and pentesting projects, we routinely found exploits that could eventually lead to us getting the access tokens and secrets for an app's user database, or the app token and secret. If I had to guess, 70% of the game/contest applications we audited had these vulnerabilities after they had launched. Companies typically approached us in a panic, asking for help with users who were cheating, and many were aghast to learn what could potentially have happened over and above rigging the leaderboards.
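For anyone wondering why a leaked app secret was such a big deal: a Facebook app access token can be formed by concatenating the app ID and secret, so the secret effectively is a credential that acts as the application itself. A hedged sketch, with placeholder IDs:

```python
# Sketch of what an attacker could do with a leaked app secret. The app ID,
# secret, and endpoint choice are illustrative placeholders.
import requests

APP_ID = "1234567890"    # placeholder
APP_SECRET = "0a1b2c3d"  # placeholder - the kind of value those exploits leaked

# Documented app-access-token format: "app_id|app_secret". No further OAuth
# dance is needed once the secret is out.
app_token = f"{APP_ID}|{APP_SECRET}"

# With that token the attacker speaks as the app - e.g. listing the app's
# roles, or hitting whatever app-level endpoints the game exposed.
resp = requests.get(
    f"https://graph.facebook.com/{APP_ID}/roles",
    params={"access_token": app_token},
)
print(resp.status_code, resp.text[:200])
```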
It's not enough that there were apps out there that had the ability to get this data. It's also not enough that there were apps that wanted to get the data intentionally. You also have to consider the number of apps that had this data and had weak security. Someone could have just broken in, taken the keys, and taken those apps' users' data that way. Nobody would have known. At best it may have emerged in the "auditing process" Facebook conducted in 2017 to see which apps performed these kinds of queries - does anyone believe they did this in good faith and disclosed 100% of the apps that looked at the social graph, friend posts, messages, etc.?
Most users were uninformed and blindly hit accept on the permissions screen because they could win a car, an iPad, or a PlayStation by entering the contests.
In my view, the entire story was exaggerated and turned into a public spectacle, for very obvious reasons that nobody likes to admit to.
> In my view, the entire story was exaggerated and turned into a public spectacle, for very obvious reasons that nobody likes to admit to.
Are you claiming that online advertising doesn't work, or that gathering large amounts of data about individual users to tailor ads to them doesn't increase the effectiveness of those ads?
> Are you claiming that online advertising doesn't work
I'm not the GP, but I think the idea is that advertising isn't coercive. When I see an ad for a Swiffer WetJet, I'm not immediately compelled to go out and buy it against my will. I only buy it if it actually meets a need that I have. Ad-tech tries to guess who might actually be most likely to have that kind of need and targets advertising accordingly (e.g. it would probably be dumb to show a Peloton ad to a handicapped person).
> gathering large amounts of data about individual users to tailor ads to them doesn't increase the effectiveness of those ads?
Is just a lot of words for "being able to find people that might agree with some ideas and showing them media that expresses those ideas". In a world with free expression, it's hard to see what the problem with that is.
From where I sit, there appears to be opposition to the fact that there exist people who have an appetite for certain "bad" ideas, and the only way to blunt those people's democratic power is to curb their ability to be exposed to those viewpoints.
A problem arises when the appetite for 'bad ideas' isn't merely for trivial things like whether Rust is better than C++, but rather consists of ideas about which of one's neighbors should be discriminated against, persecuted, imprisoned, or killed.
Providing a platform where people who subscribe to these kinds of ideas can self-organize and connect to demagogues who would lead them is not a virtuous defense of freedom of expression; it is a real threat to the liberties and lives of the people these ideas target.
Submerging people in a sealed bubble of misinformation that only reinforces their prejudices--as Facebook does through a combination of advertising, recommendations, and control over user timelines--does lead to radicalization, and that radicalization can be weaponized in ways that are fatal to others.
Just as the right to swing your fists ends at someone else's nose, the right of 'bad ideas' to be amplified by platforms ought to end before people who believe in those 'bad ideas' are turned into political weapons by demagogues and would-be autocrats.
> A problem arises when the appetite for 'bad ideas' isn't merely for trivial things like whether Rust is better than C++, but rather consists of ideas about which of one's neighbors should be discriminated against, persecuted, imprisoned, or killed.
Sure, but I don't think it's useful to argue about the extremes, since a lot of contemporary contention concerns arguably non-violent ideas. For example, is being in favor of Brexit beyond the pale? How about campaigning for it? How about anti-immigration messaging? How about anti-gun-control or anti-abortion messaging? The reality is that the vast majority of the divisive targeted ads on FB are around those hot-button issues. Almost nobody sees Holocaust denial or outright genocide advocacy.
> Submerging people in a sealed bubble of misinformation that only reinforces their prejudices--as Facebook does through a combination of advertising, recommendations, and control over user timelines--does lead to radicalization, and that radicalization can be weaponized in ways that are fatal to others.
That's not a Facebook problem, that's a human problem. You either let people consume the ideas they want in a society where information flows freely, or you force people to "eat their vegetables" (so to speak), or prevent people from reading/seeing things that we think are "too dangerous".
> the right of 'bad ideas' to be amplified by platforms ought to end before people who believe in those 'bad ideas' are turned into political weapons
Should a politician be allowed to Tweet that too many people from a particular minority group are moving into the country? (Perhaps the politician believes that these migrants are increasing crime or unemployment).
If someone reads that Tweet and then goes and violently attacks people from that minority, have they been turned into a political weapon by a demagogue?
With fists and noses, there are well-defined lines, but I'm not sure what speech or algorithms you are suggesting should be outlawed.
Where to draw the line is a very hard question for which I don't have a definitive answer.
Legally, there's precedent that RTLM[0] (Rwandan hate radio) went too far, as the people who ran it were convicted of crimes against humanity. How much lower to set the bar to protect the public while still preserving individual freedom of expression is a question I feel society isn't yet ready to answer.
I personally would set the bar at spreading false information. Most of the more vile efforts by demagogues to stir up violence are driven by fabricated propaganda. It's difficult to use facts to incite violence because reality is usually too mundane for people to get violently upset about. Non-factual propaganda, however, can be as lurid as it takes to make people get violent. Keeping outright fabrications out of public view ought to help keep some of these risks contained.
It's distasteful, for example, to spread valid statistics that connect immigrant populations with crime (in large part because such statistics, given without context, ignore integration and economic equity issues), but I feel spreading facts is less likely to trigger persecution/violence than feeding vulnerable people with a constant information diet of lies along the lines of 'immigrants eat babies.' As a result, my take on where to draw the line on political speech is 'say what you want as long as it's true.'
Thank you for volunteering to clarify. I've upvoted you, but I disagree with your points, I'm afraid.
> I think the idea is that advertising isn't coercive.
I'm not sure what definition of "coercive" you want to use, but I think it is absolutely possible for an advert to lead to someone acting against their best interest. (The term "best interest" is also quite loaded, but we can at least acknowledge cases where someone's actions are influenced by an advert and they then go on to regret those actions).
Perhaps a more concrete example would be cigarette advertising, which many societies have deemed somewhat "coercive" (albeit not as "coercive" as the nicotine itself). If these ads didn't encourage people to smoke, and merely assisted existing smokers in choosing the brand which best suited their needs, then I doubt they would need to be banned.
> "being able to find people that might agree with some ideas and showing them media that expresses those ideas"
While that sounds reasonable, I believe it misses some subtleties, particularly in the case of elections.
Firstly, people should have a right to know which entity is authoring these ideas, so that they can be held accountable and any past reputation and potential biases taken into account. Just as importantly, the public should know which ideas are being selectively broadcast by each campaign, as it is more than possible that a campaign presents mutually contradictory arguments to non-overlapping groups of people, which again should factor into the credibility of the campaign.
If those concerns are taken into account (and the general concerns of people having their personal data used to target psychometrically-engineered adverts at them without their consent) then I have no objection to "bad" ideas being widely distributed.
> Firstly, people should have a right to know which entity is authoring these ideas, so that they can be held accountable and any past reputation and potential biases taken into account. Just as importantly, the public should know which ideas are being selectively broadcast by each campaign, as it is more than possible that a campaign presents mutually contradictory arguments to non-overlapping groups of people, which again should factor into the credibility of the campaign.
I agree with all of this, and it all exists in Facebook today. Political ads on Facebook prominently display who paid for the ad, and there's a centralized ad registry containing all active ads by campaign.
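For example, anyone can pull from that registry programmatically via the Ad Library API. A rough sketch (the API version, field list, and token below are illustrative):

```python
# Querying Facebook's Ad Library API ("ads_archive"). The version, token,
# and field list here are illustrative placeholders.
import requests

resp = requests.get(
    "https://graph.facebook.com/v18.0/ads_archive",
    params={
        "search_terms": "election",
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": "['US']",
        "fields": "page_name,funding_entity,ad_delivery_start_time",
        "access_token": "ACCESS_TOKEN",  # placeholder
    },
)
for ad in resp.json().get("data", []):
    # funding_entity is the "Paid for by" disclaimer shown on political ads
    print(ad.get("page_name"), "-", ad.get("funding_entity"))
```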
The other was that it enabled the political news media to restore their damaged perceived credibility by blaming foreign meddling, an excuse for their poor, echo-chambered reporting of where their respective countries actually stood.
In the case of Brexit, the political news media was correctly reporting that the British public preferred Remain until a sudden swing in the opinions of undecided voters in the few weeks before the referendum.[0]
In the case of the US, the political news media was correctly reporting that the American public preferred Clinton to Trump.
Neither case should have damaged their perceived credibility, except in the minds of people who believed that there was a conspiracy of under-reporting the popularity of Brexit and/or Trump.
In the case of Brexit, the sudden swing to 'leave' happened very shortly after leave ran a massive campaign of deceptive dark ads on Facebook. This advertising campaign pushed factually untrue narratives (such as that the EU was on the verge of forcing the UK to be overrun by Syrian migrants)[1] to vulnerable people and, from all indications, was able to turn the referendum in leave's favor.
The sudden swing towards leave does not seem to represent a failure in polling (or media) but rather a demonstration that Facebook's dark ads platform alone can subvert the ability of a high-functioning democracy to make an informed, fact-based decision on a question that will impact generations of people.
That should terrify a lot more people than it does.
That's interesting. Something that catches my eye:
> The ICO found Cambridge had more or less the same tech in the 2016 campaign that competitors did—and even within the company, staff worried that Nix & Co. were exaggerating CA’s “impact and influence.”
> All those things you read about the thousands of data points CA had on American voters? Most of it was commercially available to anyone with the $. Points to larger issues of data & privacy but not something unique to CA.
The parties themselves own more voter information in their datasets than CA ever dreamed of.
It's one of the reasons for why third parties can't break into the American system - it's hard to compete without access to a curated-over-decades dataset of likely voters, swing voters, potential volunteers, single-issue voters, etc, etc.
The US is a two-party system[0] because it uses first-past-the-post elections. First-past-the-post electoral systems can't support more than two viable parties; the toy sketch below illustrates the vote-splitting mechanic.
[0] Nominally. In practice, the commonality of ideas between elected Democrats and Republicans, combined with structural advantages that give the GOP a disproportionate hold on power, mean that the US is more of a 1.5 party system.
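Here's a toy illustration of that vote-splitting mechanic (the vote shares are made up):

```python
# Toy model of first-past-the-post vote splitting; the numbers are invented.
votes = {"A": 40, "B": 35, "C": 25}  # C's voters prefer B over A

# Plurality winner: A takes the seat with 40% despite 60% preferring B or C.
fptp_winner = max(votes, key=votes.get)
print("FPTP winner:", fptp_winner)  # -> A

# If B and C consolidate into one party, A loses 40-60. Repeated over many
# elections, this pressure is what squeezes systems down to two parties.
merged = {"A": votes["A"], "B+C": votes["B"] + votes["C"]}
print("After consolidation:", max(merged, key=merged.get))  # -> B+C
```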
Don't mistake "downvoted for complaining about downvotes in advance" for harassment.
That said, I do think Facebook PR deliberately played up the CA story as a way of masking the fact that any Facebook app at the time had access to this data, and many thousands were certainly siphoning it off in the same way.
The CA story may have been massively overblown, but given what they were claiming to be able to do, and their political links, they deserved public exposure.
Drawing attention to the wider issues around use of personal data was a public service no matter what the motivation (and the coverage has been meagre given the importance of the issue).
I think the statements: "CA were ineffective charlatans" and "Carole Cadwalladr is misinformed" and "the methods by which CA attempted to influence the election were wrong" can all be simultaneously true.
I think they are (over?)reacting to the trope of "I voted Remain, but..." which seems to be a popular format for phrasing pro-Brexit opinions that has become a signal for insincerity through its suspicious overuse.[0][1][2][3][4][5][6] I'm sure many current supporters of Brexit were grudging Remain voters, but there seems to have been just as much, if not more, Bregret.[7]
You didn't technically express an opinion on Brexit itself, but you said:
"Cadwalladr is a total charlatan and low effort polticially motivated troll seeking to undermime democratic proces"
If you think that a British journalist reporting on law-breaking in a British referendum is undermining the democratic process more than the law-breaking itself does, then I think that says a lot about your opinion on Brexit.
Nevertheless, as you say, that opinion isn't relevant here, and I'm not the one doubting how you voted.
She didn't need to invent the law-breaking when there was enough evidence for the perpetrators to be found guilty even under a government run by senior Vote Leave figures:
* 16 August 2017, the Constitutional Research Council was fined by the Electoral Commission for "Failure to notify the Electoral Commission of political contributions it made (including £435,000 to the DUP), and gifts it received." Most of the money to the DUP (a Northern Irish party) was spent on advertising that was not circulated in Northern Ireland. That province happens to have the laxest rules in the UK about disclosing the actual sources of donations.
* 11 May 2018, Leave.EU was fined by the Electoral Commission for "Failure to deliver complete and accurate pre-poll transaction report and post-poll spending information".
* 17 July 2018, Vote Leave was fined by the Electoral Commission for "Failure to deliver a complete and accurate spending return; failure to provide documents on time."
* 24 October 2018, the Information Commissioner's Office found that between 2007 and 2014, Facebook had broken the UK data law then in force, the Data Protection Act 1998, and imposed a £500,000 penalty, the highest allowed under that Act.
* 1 February 2019, Leave.EU was fined by the Information Commissioner's Office for sending over a million emails to subscribers without consent.
* 19 March 2019, Vote Leave was fined by the Information Commissioner's Office for sending 196,154 unsolicited electronic messages to people.
These are just some of the crimes that have been investigated and where the evidence hasn't been hidden by either deleting data, or perpetrators hiding behind donor secrecy or "national security" excuses. Is it really that hard to imagine that a journalist might genuinely be motivated by trying to bring this law-breaking to the attention of the public?
One only has to compare the reaction to this and the reaction to the announcement by the DNC about using Facebook to target voters. Very few people raised issues about the DNC app, but when they found out CA helped an unsympathetic campaign with similar data, oh boy, that’s terrible!
> The Obama campaign created a Facebook app for supporters to donate, learn of voting requirements, and find nearby houses to canvass. The app asked users’ permission to scan their photos, friends lists, and news feeds. Most users complied.
> The people signing up knew the data they were handing over would be used to support a political campaign. Their friends, however, did not.
> The people who downloaded the app used by Cambridge Analytica did not know their data would be used to aid any political campaigns. The app was billed as a personality quiz that would be used by Cambridge University researchers.
The Obama campaign's approach (consent on behalf of your friends) is now no longer possible nor permitted by Facebook's API and developer terms; Cambridge Analytica's approach was a violation at the time.
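Concretely, the change that closed this off was Graph API v2.0 (April 2014), which removed the friends_* permissions; /me/friends now only returns friends who themselves installed the app and granted user_friends. A sketch with a placeholder token:

```python
# Before/after sketch of the friends-data lockdown; the token is a placeholder.
import requests

TOKEN = "USER_TOKEN"  # placeholder user access token

# Pre-2014 (v1.0): the full friend list came back, and friends_* scopes
# exposed those friends' data without their own consent.
old = requests.get("https://graph.facebook.com/v1.0/me/friends",
                   params={"access_token": TOKEN})

# v2.0 and later: only friends who ALSO use the app and granted
# user_friends appear, so "consent on behalf of your friends" is gone.
new = requests.get("https://graph.facebook.com/v2.0/me/friends",
                   params={"access_token": TOKEN})
```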
I am pretty sure the Obama 2012 campaign did something similar with their outreach Facebook apps too, as was reported in the press at the time (https://www.theguardian.com/world/2012/feb/17/obama-digital-...)