> I think the issue is that they could have taken steps to lower their rate, but didn't.
This is where it gets tough really fast. For argument's sake, let's say semantic analysis can properly identify grooming patterns at a 95% accuracy level (oooooffff, you can already see the problems).
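To make those problems concrete, here's a rough back-of-the-envelope sketch of the base-rate issue; every number below is a made-up assumption for illustration, not platform data:

```python
# Base-rate problem: a 95%-accurate classifier applied to rare behavior
# produces mostly false positives. All numbers are illustrative assumptions.
messages_per_day = 1_000_000_000   # assumed DM volume on a large platform
grooming_rate = 1e-6               # assume 1 in a million messages is grooming
true_positive_rate = 0.95          # classifier catches 95% of real cases
false_positive_rate = 0.05         # and flags 5% of innocent messages

actual = messages_per_day * grooming_rate                         # 1,000 real cases
caught = actual * true_positive_rate                              # 950 caught
false_alarms = (messages_per_day - actual) * false_positive_rate  # ~50 million

precision = caught / (caught + false_alarms)
print(f"flags that are real: {precision:.4%}")  # ~0.0019% -- nearly all noise
```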
Public/wall comments on any platform are relatively easy to police. But I'd argue the vast majority of problematic grooming happens in DMs, and policing DMs seems worse than trying to do hash analysis of private photos.
I don't think there's a win scenario here for anyone involved.
[edit]
> they could check the birthday of people,
Yeah, because predators answer that honestly.
> Policing DMs seems worse than trying to do hash analysis of private photos.
When it comes to DMs between adults and unrelated children, does it really seem worse? Because what these platforms are really enabling is a situation that has never been common or tolerated in society before: unlimited, unsupervised contact and rich (media-wise) communication between unrelated adults and children.
These platforms categorically do know when a user is an abuser, a potential abuser, or has in general been making inappropriate contact with children, and they could easily do something about it.
> When it comes to DMs between adults and unrelated children, does it really seem worse?
I should clarify: when I say "worse" I mean "significantly more invasive". Checking whether a photo's hash matches another photo's hash is, at the very dumbest level, straightforward: hash == hash, or hash is in a larger set of hashes.
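For concreteness, that "dumbest level" looks something like the sketch below. I'm using a plain cryptographic hash for illustration; real systems like PhotoDNA use perceptual hashes so re-encoded copies still match, but the shape of the check is the same set-membership test:

```python
import hashlib

# Placeholder for a database of known-bad image hashes (e.g. an industry
# hash list). Empty here; in production it would be populated externally.
KNOWN_BAD_HASHES: set[str] = set()

def photo_hash(image_bytes: bytes) -> str:
    # Exact-match hashing: any change to the file changes the hash.
    # Real deployments use perceptual hashing to survive resizing/re-encoding.
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_bad(image_bytes: bytes) -> bool:
    # The entire "analysis" is one set-membership check: hash == hash.
    return photo_hash(image_bytes) in KNOWN_BAD_HASHES
```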
But for DMs, matching words isn't good enough, because there are so many different ways to say the same thing. The system needs to understand when "I really like that top you're wearing" is a cute message between friends versus a predator, because the "who", as well as the lines right after, really changes the conversation. If you don't know the "who", then we're asking an LLM "does this conversation sound predatory!?"
The machine needs a lot more context to get this right than photo hash analysis does.
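To contrast with the hash check above, here's a hedged sketch of what the chat side needs. The llm_classify helper is a hypothetical stand-in for whatever model API a platform would actually call; the point is how much context has to be assembled before the model can even guess:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender_is_adult: bool   # identity context the classifier can't do without
    text: str

def llm_classify(prompt: str) -> float:
    # Hypothetical stand-in for a real LLM API call returning a 0-1 score.
    raise NotImplementedError("wire up to an actual model endpoint")

def conversation_risk(messages: list[Message]) -> float:
    """Estimate the probability that a DM thread is predatory.

    Unlike hash matching, this needs the whole thread plus who the
    participants are: the same sentence reads benign or alarming
    depending on that context.
    """
    transcript = "\n".join(
        f"[{'adult' if m.sender_is_adult else 'minor'}] {m.text}"
        for m in messages
    )
    prompt = (
        "Given this conversation between the labeled participants, "
        "estimate the probability (0 to 1) that it shows grooming:\n"
        + transcript
    )
    return llm_classify(prompt)
```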
That's why, to me, analysis of chats is way "worse" than analysis of photos.
> If you don't know the "who" then now we're asking an LLM does this conversation sound predatory!?
But you always will know the 'who': you (the platform) know that it's a conversation between an adult and a child unrelated to them, and yes, at this stage, simply asking an LLM 'hey, is this conversation sus?' will give you results that are more than enough to trigger a human-level analysis of the interaction.
Also, you have to consider that adults and unrelated children generally do not interact at all. So, if you wanted to legitimately try to screen for abusers, filtering for 'adults that initiated more than 5 conversations with children unrelated to them' and then feeding all of those interactions into an LLM classifier would get you 99% of the way there.
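A rough sketch of that funnel: a cheap metadata filter first, with the expensive LLM pass run only on whoever survives it. The threshold and the two hooks are illustrative assumptions, not anything a platform has published:

```python
CONVERSATION_THRESHOLD = 5   # the "more than 5 conversations" cutoff above
REVIEW_SCORE = 0.5           # assumed risk score that triggers human review

def screen_adults(adults, initiated_minor_chats, conversation_risk):
    """Yield (adult, conversation, score) triples worth human review.

    initiated_minor_chats(adult) -> conversations the adult started with
    unrelated minors; conversation_risk(convo) -> 0-1 risk score. Both
    are hypothetical platform-side hooks passed in for illustration.
    """
    for adult in adults:
        convos = initiated_minor_chats(adult)
        if len(convos) <= CONVERSATION_THRESHOLD:
            continue  # too few contacts to bother with the expensive pass
        for convo in convos:
            score = conversation_risk(convo)
            if score >= REVIEW_SCORE:
                yield adult, convo, score
```

The design point is that the LLM only ever sees the tiny slice of traffic the metadata filter already flagged, which also keeps the base-rate problem from the top of the thread manageable.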
I’m not able to read the cited reference at that link, so I’ll try to take the statement at face value.
I could name or describe a number of systems right now whose outcomes we could all agree are contrary to or misaligned with their purpose, so unless the author is equivocating on terms like "purpose" or "system", I don't think your premise is well founded, nor do I think your conclusion has merit.
The conclusion is that systems should not be judged deontologically, and instead should be judged consequentially.
The point is that if you could describe systems that are contrary to their purpose and have negative outcomes, one is justified in referring to them as systems whose purpose is that negative outcome. Doing otherwise is arguing from conclusions.
It's naive not to. Believing that Facebook's purpose is what it intends to do is like believing that North Korea is a democratic people's republic.
I can rephrase it in terms of intentionality if you wish:
Meta has a choice every day to exist or not. When the choice to exist results in the sexual harassment of children, then transitively Meta is choosing that consequence.
To address your knife analogy: a knife alone is not a system. A knife plus a human plus another human being stabbed by the knife is. In that scenario, the person creating the system, the knife-wielder, should (like Meta) choose to dissolve the system. I think, at least for the knife+human+human-stabbing system, that's quite easy to agree with.