Youtube's explanation is basically: "It was done by an algorithm. We don't know how our algorithms work. Sometimes they do this. We turned them off for this event."
It works exactly as planned. Google can prove to regulators that it makes sufficient efforts - which is enough to wash its hands. On the other hand, a malfunctioning algo lets Google make some extra advertising dollars.
I.e.: tons of online trolls were trying to stir up false racist and xenophobic outrage on social media about this event, so the YouTube algorithms picked up on some of it and linked the associated article.
But I can't -- really can't -- imagine how any Islamic fundamentalist group could claim responsibility for this. What would be the point?
So if anyone does claim responsibility, it's ~100% sure that they're just trying to incite Islamophobia.
Edit: OK, not ~100% sure. But it's either that, or they're crazy. And yes, I suppose that it's arguable that all terrorists are crazy. I still maintain that it's an obvious possibility, however. Not at all likely, but still obvious.
I am not saying Islamists did this, but are you really saying that it doesn't make sense for them to destroy other religions' places of worship? In Indonesia and other places they sure think differently.
Maybe I'm deluding myself, but Notre Dame just seems so much more than a "place of worship".
But then, the Taliban did destroy monumental statues of Buddha in Afghanistan. Which were about twice as old as Notre Dame. So maybe I am deluding myself :(
And OK, I take it back. Nothing is so shocking that it's all that unlikely.
But still, wouldn't you think that a terrorist attack on Notre Dame would involve something more like a hijacked gasoline tanker than a fire that perhaps started in the attic?
It's beautiful. A marvel of engineering. Such a major, iconic part of Paris. It's survived so many wars. The Nazis could have destroyed it, as they destroyed the tsarist palaces outside Leningrad. And as the Allies destroyed so many cathedrals in Germany.
But it survived. And to be burned out by some idiot with an agenda would be so sad and pointless, whatever cause the idiot was supporting.
But of course, it'd also be sad if it was totally an "accident". An electrical fire. Someone who left a heater running. Or didn't fully extinguish a cigarette.
Anyway, I'd be very suspicious if someone claimed responsibility. If for no other reason, because if it had been a terrorist attack, we would know by now. A key point in terrorist attacks is having them be obvious. Not looking like accidents.
I can see what Google tried to do. But it's obvious that the technology failed miserably, on a problem that likely should not have been "solved" with automation. What if there had been a terrorist attack? Clearly these messages would be not just incorrect, but unimaginably inappropriate. It's clearly not ready to be used now, if it's even possible to teach a machine what 9/11 conspiracy theories are.
The team that built this should be dissolved. This was a bad call on the engineering side, the product side, and the management side. Nothing about this was a success.
It should never have been built in the first place. It's an attempt to put a band-aid on the conspiracy theories on the platform, but it completely loses sight of the fact that bad things do happen and that bad things often look visually similar.
Even if it achieved its original goals flawlessly, it fails to solve the problem Google actually faces (being a platform for misinformation). If you got all the way to a conspiracy theory video, is Encyclopedia Britannica really going to change your mind? And being a not-flawless computer system, it will _always_ have false positives, which means as long as it exists it will flag events like this as misinformation. The latter, in my opinion, is inexcusable. Any engineer building any sort of classification system knows that there will be false positives, yet nobody at Google thought this would be problematic enough to stop the system from being built.
I can't think of any place where this sort of "feature" would've been appropriate. An article from Wikipedia, which anyone can edit, is like the least convincing thing they could've linked to.
Of all the things to get angry at Google about, falling short every once in a while at solving an incredibly difficult technical challenge is just ludicrous.
Or maybe they're using code to "solve" something that isn't a technical problem in the first place (having great programmers isn't necessarily the same as having smart people).
What is the problem that they are solving? I don't want Google to be my lens on the world. I never paid them for that. It is for this reason that I unsubscribed from YouTube Red. They stopped being a neutral platform.
So the fact check technology became the conspiracy theory generator?