
This is the new norm... expect all services to require phone numbers due to spam and propaganda.


I have a really split opinion about this. On the one hand, I've seen a lot of really hateful stuff lobbed around on sites like Twitter, and I suspect that linking accounts to phone numbers would dramatically reduce that. On the other hand, I'm not sure I want Twitter (et al.) knowing my phone number.


I think that at this point FB has proven that people will be nasty regardless of how not-anonymous they are.

As such, I'm doubtful this will change anything about people's behavior.

It's also quite scary how nonchalantly many people here argue for this as a way to stop "propaganda", which these days seems to be defined as loosely as "anything that doesn't conform to a Western/US-centric narrative".

Because I have yet to see one of these "propaganda ban waves" justified by anything but "Russian/Iranian/Chinese propaganda", as if that's the only kind of "propaganda" that exists [0].

As such I consider these "propaganda bans" just another exercise in propaganda [1].

[0] https://washingtonsblog.com/2014/07/pentagon-admits-spending...

[1] https://en.wikipedia.org/wiki/Falsehood_in_War-Time#Summary


This is a bit of a hypothetical, but does some sort of PKI-like scheme exist that would allow me to hold a certificate (of sorts) from an authority that I could use to prove myself to a service, but that simultaneously would not leak any information about me to that service? Similarly, service A and service B should not be able to link my accounts behind my back. Sounds like an interesting crypto problem.
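
For concreteness, here is a toy sketch of one possible building block, a Chaum-style RSA blind signature, where the authority signs a credential token without ever seeing it. This is only an illustration; a real system would need a full anonymous-credential protocol (something like Privacy Pass or Idemix) to also keep services A and B from linking accounts.

    # Toy Chaum-style RSA blind signature: the issuer signs a value without
    # seeing it, and the signature can't be linked back to the issuing session.
    import hashlib
    import math
    import secrets

    from cryptography.hazmat.primitives.asymmetric import rsa

    # Issuer ("the authority") key pair.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    pub = key.public_key().public_numbers()
    n, e = pub.n, pub.e
    d = key.private_numbers().d

    # User: a credential token the issuer must never see in the clear.
    m = int.from_bytes(hashlib.sha256(b"my-credential-token").digest(), "big")

    # User: blind the token with a random factor r.
    while True:
        r = secrets.randbelow(n)
        if r > 1 and math.gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n

    # Issuer: signs the blinded value (it sees only `blinded`, never `m`).
    blind_sig = pow(blinded, d, n)

    # User: unblind to obtain an ordinary RSA signature over m (Python 3.8+
    # for the modular inverse via pow(r, -1, n)).
    sig = (blind_sig * pow(r, -1, n)) % n

    # Any service can verify the credential against the issuer's public key,
    # but the issuer cannot tie it back to the session in which it was issued.
    assert pow(sig, e, n) == m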


There'd need to be some central authority (like you said) doing the certificate issuing that verifies that you are, indeed, a human, and that you're a unique human they haven't already given a certificate to.

Chances are they'd want more than a phone number; probably a photo ID or something as well, else their value proposition isn't very strong.

I'm not sure if that's better for privacy and safety than many services asking for just a phone number (which I can generate a semi-throwaway Google Voice number for).


I agree. I would rather pay them some fee. But they don't seem to want to monetize their service via user fees.


Oh my god, paid Twitter would be amazing.

Well, then again, Something Awful was a paid forum, IIRC, and it was still a den of villainy.


But on Something Awful, it was a feature, not a problem.


Seems like it would make a lot of sense for them to offer both options; either would discourage bots, and you'd give people the choice whether to go the free route or the privacy-conscious route.


Surely a phone number is no more identifying than a credit card payment in the age of KYC.

And accepting a cryptocurrency would probably have little impact on all the lucrative scams that get spammed around Twitter.


I like it.

I've grown suspicious of many accounts on Twitter. Maybe I can learn to trust more.


Sad but true. At the hosting company I work for, we had to start requiring phone numbers for cloud servers due to a massive wave of abuse. It's one of the few roadblocks that helps at least somewhat.

Previously, we didn't even require a full name. A few idiots always ruin it for everyone.


> A few idiots always ruin it for everyone.

Bad attitude. They're not idiots, they're abusers. Ruining it for everyone else is a feature, not a bug, although if they have been using a platform successfully they may be sad about their scope for abuse being curtailed. Also, consider that punishing everyone because of the actions of a few is the overreaction of someone who isn't willing to understand the problem and just wants it to go away.


How would you solve hosting fraud, then?


You seem to ignore that this constrains abusers as much as it does legitimate users. Collective punishment in order to mitigate abuse by a few is the wrong way to go.


Easy to say until you're running a service that's a target of said abuse. Do you have a better solution?

If you do, it'd be quickly adopted because no one likes adding unnecessary friction.


Invest in your content moderators and in tools to track and trace abusers. Very few firms seem to have any interest in how or why they are targeted by abusers, or in the dynamics of the abuse that takes place on their platform. People pointing out abuse are usually treated as an annoyance, when in fact they may have considerable knowledge about the bad actors exploiting the service.

A very low-cost approach, suitable for a small firm: if someone is abusing your platform, expose their account history.


What if it's not a content site, but abuse of "free" resources/trials?


Then you can adjust the meaning of my comment to encompass that. I'm not trying to describe all conceivable use cases.


This is, you might say, not just an important problem in society, but the only problem in society.

"Why do I need a driver's license? It's just bureaucracy and a revenue collection scam." Except, when you don't test drivers or provide a mechanism for taking bad drivers off the road, a few bad people spoil it for everyone.

And so on, and so forth. That is not a justification for any one thing like this, but the general principle is that when the bad actors make things toxic enough for the mainstream users, somebody has to step in, or a social platform quickly degrades until it becomes 4Chan, or Gab, or whatever.

Same reasoning behind moderation here on HN.


> "Why do I need a driver's license? It's just bureaucracy and a revenue collection scam." Except, when you don't test drivers or provide a mechanism for taking bad drivers off the road, a few bad people spoil it for everyone.

This is an atrocious analogy. The reason we require licenses for motor vehicles is that they are very dangerous pieces of machinery that can easily do fatal damage to car occupants and pedestrians, as well as property damage. Likening such a domain to that of communication and speech (what we're discussing here) is ridiculous.


Think through this analogy a little more thoroughly. Freedom of speech is an important issue precisely because it’s dangerous. Speech can ignite revolutions against unjust tyrants, and speech can also mobilize hate and terrorism.

Speech is not without consequence to society. If it was not dangerous to the lives and property of others, it wouldn’t matter so much.

I think the argument that speech is less dangerous than the right to drive a car is naïve and uninformed by both history and what we see in plain sight.

I mean, seriously, can you look at the white supremacist terrorists radicalized online and tell me that speech has no consequences?

Of course it has consequences. If speech didn’t have consequences, it wouldn’t be worth defending.

———

But even if you refuse to accept that speech is dangerous, you must accept that it has consequences, that it can affect the experience of other people.

If it didn’t, there wouldn’t be a need to moderate speech on this very platform. Everyone could post anything they like. It would be more like... Maybe the right to park your car on a busy street during rush hour.

Nobody will slam into your car, but it will certainly affect their use of a common resource.

Unrestricted use of a common resource leads to a tragedy of the commons, and nobody ends up enjoying it except the vermin, who reduce each other’s enjoyment to the barest minimum.


There are two ways of dealing with the issue. You can default-deny, like only allowing people to drive after a test, or you can default-allow, just like almost everything else.

We usually use default-deny only where the severity of bad behavior is very high. That's because it imposes a high cost on both ordinary users and the test issuers, and a very high cost on the few people caught as false positives. It is a very damaging mode for society. Yet we are migrating toward default-deny everywhere on the internet, even in contexts with few real consequences, and the previous paragraph still applies.

We may get a better world if we take some of the privacy away from the network level; we may even get to keep more of it overall.


Well, it works, and it's a minor annoyance at most for our legitimate customers.

Abusers, on the other hand, have to burn a phone number on each account that gets locked.


I think you don't get it. It's not about annoyance. It's about the complete unreliability of any online service today. Any and every customer shows (and rightfully should show, if they don't yet) complete distrust, for good reason.

You ask for my phone number today, and the next day I'll find it out in the open, because your and others' businesses don't give a .... when it comes to security.

And don't tell me that's only a minority or the exception. Because that's just simply not true.

500px, Quora, Facebook, Twitter, Equifax among others all have been hacked at one point or have been exposed as unreliable and untrustworthy. It's just simply not a logical proposition to trust any online platform with private and/or sensitive data.


We do care about security and hash the phone numbers after sending the verification SMS (we only need to determine whether a given phone number is associated with a locked account - a hash is good enough for that).
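
Very roughly, the check looks something like this (a simplified sketch, not our actual code; the keyed hash with a server-side pepper and the normalization rules here are illustrative assumptions):

    import hashlib
    import hmac

    # Illustrative only: a server-side pepper kept out of the database, so a
    # leaked table of hashes can't be reversed by brute-forcing the small
    # phone-number keyspace.
    PEPPER = b"example-server-side-secret"

    def phone_fingerprint(number: str) -> str:
        """Keyed hash of a normalized phone number, stored instead of the number."""
        normalized = "+" + "".join(ch for ch in number if ch.isdigit())
        return hmac.new(PEPPER, normalized.encode(), hashlib.sha256).hexdigest()

    # Fingerprints of numbers tied to previously locked accounts (illustrative store).
    locked_fingerprints: set = set()

    def is_associated_with_locked_account(number: str) -> bool:
        return phone_fingerprint(number) in locked_fingerprints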

Our problem is that criminals open hundreds of accounts with fake data and stolen credit card data, abuse our services until we get abuse complaints or detect it and lock them, then repeat that. This leads to legitimate customers suffering from bad IP reputation and is expensive to clean up.

Requiring phone numbers and blacklisting known throwaway providers has been extremely effective in preventing this, without generating complaints from our legitimate customers. We don't want to use browser fingerprinting or other intrusive mechanisms for detecting Sybil registrations.

What else do you suggest we do?


I assume by "propaganda" what you mean is "deplatforming".

Much easier to keep people who emit thoughtcrime off of your platform if they have to keep getting new mobile numbers each time they are banned.


Or, you know… propaganda.


When the NYT is cheerleading for the next foreign war, will they get kicked off Twitter too? I sincerely doubt it.

America's newspaper of record has a track record of being wrong during the onset of wars, parroting whatever Washington tells them is true, then eating crow much later and apologizing for it. The most recent case was the aid truck fire in Venezuela, which took them weeks to correct. You've also got the WMDs in Iraq that never existed, and their parroting of the Nayirah testimony in 1990, only to admit two years later that it was fraudulent.


Or, you know, algorithmically prevent their posts from showing up in front of people.


Does Twitter have a right to police their platform or not?


Companies shouldn't be regulated at all ever except when it comes to letting racists spout hate. Then they should be regulated into requiring that. Plus harassment and other bad behaviours.


Racist hate is still legal speech.


You'd have to agree that the people affected most by this change will be bot networks (Russian IRA, etc).

I doubt their goal is to keep "right wingers" they ban off their platform.


True, requiring phone numbers is meant to prevent Sybil attacks and provide accountability.


Phone numbers are extremely easy to obtain. This won't achieve anything in the long run.


Can I have 100 phone numbers for free? I don't believe it is so easy.


The same way we got the robocall situation (thanks to shady telcos that turn a blind eye), this system will be bypassed by similar telcos offering massive blocks of numbers at very cheap per-unit prices (Twilio numbers seem expensive at small scale, but I guarantee you can get those prices down if you commit to getting a thousand or so at once).


What you're saying is, this has increased the cost of creating a Twitter spam account from practically zero to... what, 50 cents a number? A few dollars per number?

That's a huge leap, and it sounds like requiring a phone number is a great way to increase the cost of spamming.


Especially considering your average user already has a phone number and it's basically free to verify. It only has a cost for spammers.


It has a non-negligible privacy cost.


Which is almost funny, because phone numbers have been so devalued by marketers.


> spam and propaganda

When they tell you it's not about the money... it's about the money.

When they tell you they want your phone number for anything other than making more money... it's about the money.

Edit: -4, huh? Really? Have people forgotten what Facebook just did? https://www.eff.org/deeplinks/2018/09/you-gave-facebook-your...


???

Of course it's about the money.

I'm not the CFO at Twitter or anything, but even I can see that spam and propaganda cost those guys a lot of money. The number of advertisers who stop paying Twitter because of spam will be orders of magnitude larger than the number of advertisers who stop using Twitter because of phone numbers.


I think you know what I mean.


Maybe I don't? Because that's what I thought you meant.

I thought you meant Twitter is trying to put the squeeze on the spammers and propaganda ministers because they will lose advertisers if they can't?

Did you mean something else?


Did you read the link to the EFF page I added to the post? Why should Twitter be considered any more trustworthy than Facebook when they ask me for something they don't need to know?

They can avoid being victimized by spam and propaganda some other way... preferably some other way that I couldn't trivially defeat by giving them the number of a throwaway SIM card or a public phone booth.


How else would you suggest they combat spam? How do you receive a verification SMS on a public phone booth?

Most actual humans have a phone number, and Twitter wants a semi-1:1 mapping between human and Twitter account. Spammers have hundreds. This seems like a reasonable way to greatly increase the cost of making accounts for spammers.


> How else would you suggest they combat spam?

Disable the account temporarily when some (small) number of other users flags its posts as spam. If a user is discovered to be filing false spam reports, disable that account. Accounts without a history of posting legitimate tweets should be rate-limited in both their posting and reporting privileges.

Externalizing the costs of fighting spam and "propaganda" (whatever that is) by demanding irrelevant personal information from all users is not the answer... at least, it's not the answer to those particular questions. It's better to empower users to build the trust necessary to solve the problem themselves.
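
Roughly, the kind of policy I have in mind looks something like this (the thresholds are made up purely for illustration):

    from dataclasses import dataclass, field

    # Illustrative thresholds only.
    SPAM_FLAG_THRESHOLD = 5       # distinct reporters before a temporary suspension
    FALSE_REPORT_LIMIT = 3        # false reports before the reporter is suspended
    NEW_ACCOUNT_DAILY_POSTS = 10  # rate limit for accounts with no legitimate history

    @dataclass
    class Account:
        legitimate_posts: int = 0
        false_reports: int = 0
        suspended: bool = False

    @dataclass
    class Post:
        author: Account
        reporters: set = field(default_factory=set)

    def flag_as_spam(post: Post, reporter_id: str) -> None:
        post.reporters.add(reporter_id)
        if len(post.reporters) >= SPAM_FLAG_THRESHOLD:
            post.author.suspended = True  # temporary, pending review

    def record_false_report(reporter: Account) -> None:
        reporter.false_reports += 1
        if reporter.false_reports >= FALSE_REPORT_LIMIT:
            reporter.suspended = True

    def daily_post_quota(account: Account) -> int:
        # Accounts without a history of legitimate tweets are rate-limited.
        return NEW_ACCOUNT_DAILY_POSTS if account.legitimate_posts == 0 else 1000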


> Disable the account temporarily when some (small) number of other users flags its posts as spam...

I'm not sure you're really hearing us.

The advertisers don't want their ads connected to spam or propaganda posts AT ALL. If Twitter actually shows the post, and then has ads along side it, it's too late. I mean it's great that someone bothers to flag that post as spam, but the advertiser's Twitter firehose processor is going to detect that pairing. Under your proposed regime the advertiser would be constantly detecting violations by spam users who were not being punished by Twitter. Under Twitter's proposed regime, the advertiser would report the first violation and the user, and his/her posts, would be gone. On top of that, there would be far fewer occurrences of such matches in the firehose data in the first place because accounts would be more difficult to create. Now add to all that the fact that the spammer would have to get a new burner phone to create a new account. That cost, coupled with the fact that each account could only pull off limited spamming means less profit for the spammer.

Put another way, the ROI of each new account tends toward zero under Twitter's model. Under your model, the ROI is bounded only by chance. That chance being the chance that enough people bother to mark the post as spam. Here's the thing though, what if they don't? What if the first Twitter hears about the spam is from the advertiser? Being in that situation is what Twitter is trying to keep to a minimum. That is the nightmare scenario that they live in today. Today they are in that situation several hundred times per week. Those are uncomfortable calls. (Probably hundreds per day by now? I haven't checked in a while.) With their new system, over time, I could see that going to ten to a hundred a week. (Maybe even lower if you add automated firehose processing for advertisers on the backend.)

They say "The customer is always right." Well, for Twitter, the customer is the advertiser.


> The advertisers don't want their ads connected to spam or propaganda posts AT ALL

Then they will need to get used to disappointment, just like the rest of us. What they're asking for -- and what Twitter is promising -- is not reasonably achievable without fundamentally changing the nature of the service.

> Today they are in that situation several hundred times per week.

TWTR has a $25 billion market cap, which they achieved with their current terms of service. I'm sure I have a violin small enough to play for them around here somewhere, but my scanning electron microscope is in the shop.



