Snapchat is harming children at an industrial scale? (afterbabel.com)
156 points by Tomte 6 days ago | 137 comments





> “Think it would be interesting to investigate how healthy Snapstreak sessions are for users… If I open Snapchat, take a photo of the ceiling to keep my streak going and don’t engage with the rest of the app, is that the type of behavior we want to encourage? Alternatively, if we find that streaks are addictive or a gateway to already deep engagement with other parts of Snapchat, then it would be something positive for “healthy” long term retention and engagement with the product.”

For a second I thought this employee was talking about what's healthy for the user. Certainly not, though; they mean what's healthy for the "user base". I find it very interesting how this sort of language leads to certain employee behaviour. Using the concept of "health" to mean retention and engagement can overshadow thinking about health from the user's perspective; it's similar terminology, but very different, and sometimes even opposite, goals.


Bingo. If more people were carefully analyzing language, they could spot earlier when people are on the slippery slope of, let's call it, anti-human beliefs, and could then help them correct course.

If we don't, these narratives get normalized. A society is on a curve of collective behavior; there is no stable point, only direction.


GitHub does the same thing with commits, displaying them on your profile. Is that remarkably different than what Snapchat is doing?

I'd say so. Some obsess over their commit history, but it is mostly out of the way and only a representation of how active you are. The Snapchat streaks are a key feature designed to keep you coming back every day; you can even pay a dollar to restore one if you miss a day.

Back when I was graduating from uni, one day I just decided that the pressure of Snap streaks was too much. I had streaks of 700+ days with a person I barely talked to. But most of my streaks were with my best friends, people I talked to every day.

It was like a daily ritual, and I couldn't escape it for a while. I decided to go cold turkey, since it felt like the only option. All my friends moaned and complained for a while. They even tried to revive the streaks, but I persisted. It feels really silly when I look back, but 700 days means I was sending snaps every day for two years straight.

I still have the app, and there are still a few friends of mine who send me snaps about their whereabouts, but I have stopped using it. Blocking the notifications was one of the best decisions I could have made, since that was the single biggest factor in not opening the app itself.


> Blocking the notifications was one of the best decisions I could have made

I've done this for all social media, and more recently deleted all social apps. I'll go on Facebook sometimes through the web browser, mainly for marketplace.

Facebook was the first app I tested disabling notifications on. This had to be about 10 years ago; I noticed they would give me a new notification every 5-10 minutes. I was addicted to checking what the notification was. Usually garbage, and the less I used Facebook the more garbage the notifications became. Since I've stopped using Facebook for anything but marketplace, my entire feed is now garbage. The algorithm doesn't know what to do with me now and the history it has for me.

Having no social apps has been a hard change to get used to. But I feel so much better not feeling like I need to scroll.

I only scroll on Hacker News now… which is easy because the front page doesn't get that many updates in a day, and after several minutes of browsing "new" I'm satiated, having seen all I might want to see.


So after doing this for 2 years, what were the negative effects other than a few seconds spent each day?

Anyone remember YikYak? I was in university at the time, and the explosive growth was wild. After the inevitable bullying, racism, threats, and doxxing that came with the anonymous platform, YikYak enabled geofencing to disable the app on middle and high school grounds.

I think every social media platform with an "age limit" should be required to do this as well. And open it up, so that anyone can create their own disabling geofence on their property. How great would it be to have a Snapchat-free home zone? Or FB, or TikTok.


At my college, someone got kicked out for yikyakking "gonna shoot all black people a smile tomorrow", and everyone quickly realized exactly how anonymous it really was after the guy was found a few hours later.

Thing is, there was a comma between "people" and "a smile" which made his poorly thought out joke read a lot differently. Dumb way to throw away your education.


I don’t understand. The “joke” would be if there was no comma. Putting a comma seems like they wanted to cause panic, and feign ignorance later.

Yes, that's what he tried to argue (it was a joke bro) in the lawsuit that followed, to try to get back in. He lost.

Personally, I think he just flubbed it. At the time, memes like "I'm gonna cut you <line break> up some vegetables" were popular. Can't expect a dumbass edgelord to have good grammar.

Either way, it was a stupid thing to do and he paid for it.


Crazy Smart (;

Edit for clarity: /s - I went to the same university which had the above slogan.


So basically, if he hadn't added the comma, he'd still be at college.

So he got kicked out because of an extra comma, which he added to make it even more edgy, at the cost of reducing plausible deniability to nearly zero.


I’m not sure which college was involved here, but if I were the person adjudicating this, I imagine the outcome would not have hinged on the comma.

Well, without the comma it can be entirely plausibly framed as a nice statement, no?

That is not exactly how disciplinary or legal procedures tend to go. The intention is clear here.

Nope, it isn't. Not without the comma. And especially without context.

I'm being obtuse but I don't see the comma thing making the "joke" come off differently, what am I missing?

The phrase "shooting a smile at someone" means to briefly or quickly glance at someone while smiling. Perhaps "shot a glare in his direction" is more familiar?

Depending on whether the comma is there, the speaker is either planning to make happy gestures at people, or planning to shoot people with a firearm, which apparently makes him happy.


To shoot a smile means to smile at someone. So the pun is that he is going to smile at every black person he sees.

We block a number of online properties including Snapchat and YouTube using NextDNS.

We have different profiles for different devices to allow, for example, YouTube on the television but not on the kids' tablets or phones.
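
If you want to sanity-check that the filtering is actually being applied from a given device or network, here's a minimal sketch using dnspython. The resolver address is a placeholder (use the endpoints from your own dashboard), and whether a blocked domain comes back as 0.0.0.0 or NXDOMAIN depends on the provider, so treat this as an illustration rather than the exact behavior:

    # Minimal sketch: check whether a filtering DNS resolver blocks given domains.
    # Assumptions: RESOLVER_IP is your NextDNS (or other filtering) endpoint, and
    # blocked domains come back as 0.0.0.0 or NXDOMAIN -- check your provider's docs.
    import dns.resolver  # pip install dnspython

    RESOLVER_IP = "203.0.113.1"  # placeholder; use the address from your dashboard
    DOMAINS = ["snapchat.com", "youtube.com", "example.org"]

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [RESOLVER_IP]

    for domain in DOMAINS:
        try:
            answers = resolver.resolve(domain, "A")
            ips = [rr.to_text() for rr in answers]
            blocked = all(ip == "0.0.0.0" for ip in ips)
            print(f"{domain}: {'blocked' if blocked else 'allowed'} ({', '.join(ips)})")
        except dns.resolver.NXDOMAIN:
            print(f"{domain}: blocked (NXDOMAIN)")
        except dns.resolver.NoAnswer:
            print(f"{domain}: no A record returned")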


that's only good for the devices using your internet though no? not if they have data.

I install a configuration profile on their devices which forces NextDNS regardless of whether they're on my wifi, LTE, or their friend's wifi.

https://apple.nextdns.io/


why would you give a kid data? (as in cell data, presumably) I guess, to be able to helicopter them from anywhere...

Apple devices would still have parental controls in that case, though, I think?

Cellphone companies should really step up, here.


Even if they don't have data ... they may use someone else's wifi. NextDNS configuration profiles address this - https://apple.nextdns.io/

oooh, this is good. I am bookmarking this for the future (my son is only 3.9)

One past thread: Thank You, Yakkers - https://news.ycombinator.com/item?id=14223199 - April 2017 (108 comments)

Lots of comments: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


Ah, a world where this is taken to an extreme might even bring back the mythical https://en.wikipedia.org/wiki/Third_place rapidly disappearing in the American suburb and city alike... because it becomes the only place in the community where property owners don't geofence to forbid social media use!

https://theweek.com/culture-life/third-places-disappearing

But of course, social media companies will pour incredible amounts of money into political campaigns long before they let anything close to this happen.


Technological solutions to societal problems just don't work.

Some $EVIL technology being fashioned to harm individuals isn't to blame - the companies behind that technology are. You can pile up your geofencing rules, but the real solution lies somewhere between you deleting the app and your government introducing better regulation.


By this logic, technological "progress" cannot cause societal problems?

Which of course it can, so why can't a part of the solution be technological?


It can be, but I think practically it can't be. Maybe that doesn't fit into a nice logical statement, but there you have it. Or: when you build yourself a constantly-accelerating, never-stopping racecar and get on it, it's hard to build a steering wheel or brake pedal for it. Or or: it's a lot easier to get into a deep hole than to get out of one.

Geofencing around schools is the kind of thing you might see if government attempted to regulate this

Don’t we geofence sale of alcohol and tobacco around schools?

I think vending machines dispensing whiskey shooters would be a great addition to any classroom.

People clearly want the product, and I would clearly stand to make a lot of money from it.


You want every (any?) app knowing your exact location at all times? That's not how we "geofence" the sale of physical goods.

I imagine this could be set up on the operating system side. All the apps would receive is a go/no go signal, not fine coordinates
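
Roughly what I have in mind, as a toy sketch (a purely hypothetical API, nothing any OS actually exposes today): the OS holds the coordinates and the registered zones, and apps only ever see a boolean.

    # Toy sketch of an OS-side geofence check (hypothetical, not a real OS API).
    # Apps never see coordinates; they only get a yes/no "social media allowed here" flag.
    from math import radians, sin, cos, asin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in meters."""
        r = 6371000  # mean Earth radius in meters
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * r * asin(sqrt(a))

    # Zones registered with the OS by schools or property owners: (lat, lon, radius_m)
    RESTRICTED_ZONES = [
        (40.7410, -73.9897, 300),   # e.g. a school campus
        (40.7128, -74.0060, 150),   # e.g. someone's "Snapchat-free" home
    ]

    def social_media_allowed(device_lat, device_lon):
        """The only thing an app would ever receive: a single boolean."""
        return not any(
            haversine_m(device_lat, device_lon, lat, lon) <= radius
            for lat, lon, radius in RESTRICTED_ZONES
        )

    print(social_media_allowed(40.7411, -73.9899))  # inside the first zone -> False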

That would be a good start. I guess someone at Apple has already been brainstorming about it for a while. I still think geofencing is a poor bandaid to patch a problem we've created in the first place. Just like notification filtering rules, it's like liquor vendors referring you to addiction therapy.

Or maybe the schools just don't let kids bring phones in at all.

That’s the same geofencing by another authority

> Technological solutions to societal problems just don't work.

Ehhh, that's just a poorly thought out slogan whose "truth" comes from endless repetition. Societal problems can have technical origins or technical enablers. In which case a technical solution might work to make things better.

So no, there's no technical solution to "people being mean to each other," but there is a technical solution to, say, "people being meaner to each other because they can cloak themselves with anonymization technology."


> Societal problems can have [...] technical enablers.

That was my point.

> [...] there is a technical solution to, say, "people being meaner to each other because they can cloak themselves with anonymization technology."

I've never used (or even heard of) YikYak before, but what solution are you suggesting exactly? De-anonymisation? How would you achieve that? Suppose you have a magical^W technological de-anonymising wand, how would that not cut both ways?

So YikYak enabled geofencing, to alleviate the problem they've caused in the first place? But let's suppose they didn't do that.

How could I, as an average parent trying to protect my child, employ such a solution on my own? Could my tech-savvy neighbor help me somehow? Is there a single person outside of YikYak who can build a solution that any parent could use?


Geofencing requires constantly sharing location data.

(Since the TikTok post was swapped out with this one, I'll repost my late comment here, since it applies to a lot of companies.)

> As one internal report put it: [...damning effects...]

I recall hearing of related embarrassing internal reports from Facebook.

And, earlier, the internal reports from big tobacco and big oil, showing they knew the harms, but chose to publicly lie instead, for greater profit.

My question is... Why are employees, who presumably have plush jobs they want to keep, still writing reports that management doesn't want to hear?

* Do they not realize when management doesn't want to hear this?

* Does management actually want to hear it, but with overwhelming intent bias? (For example, hearing that it's "compulsive" is good, and the itemized effects of that are only interpreted as emphasizing how valuable a property they own?)

* Do they think the information will be acted upon constructively, not for evil?

* Are they simply trying to be honest researchers, knowing they might get fired or career stalled?

* Is it job security, to make themselves harder to fire?

* Are they setting up a CYA paper trail for themselves, in case the scandal becomes public?

* Are they helping their immediate manager to set up CYA paper trails?


My team at Facebook in the 2010s made many such reports.

We did that work because our mandate was to understand the users and how to serve them.

We did that with full good natured ethical intent.

We turned the findings in to project proposals and MVPs.

The ones that were revenue negative were killed by leadership after all that work, repeat cycle.


Interesting. Any sense of whether that system was consciously constructed? (Like: task a group with generating product changes appealing to users, and then cherry-pick the ones that are profitable, to get/maintain a profitable, good product.)

Or was it not as conscious, more an accident of following industry conventions for corporate roles, plus corporate inefficiency and miscommunication?


It was extremely scientifically methodical. Everything is designed from UX and other sources of holistic research. Then validated with the most built-out AB test system you can imagine. Only winners are kept.

Meta is doing this thousands of times per month, all the time.
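
For those who haven't seen such a system from the inside, the core of the "only winners are kept" loop is roughly a two-proportion test on some engagement or retention metric. A stripped-down sketch with made-up numbers (an illustration, not the actual pipeline, which layers on guardrail metrics, sequential testing, etc.):

    # Stripped-down sketch of the "only ship the winners" A/B decision.
    # Numbers are invented; real systems are far more elaborate.
    from math import sqrt, erf

    def two_proportion_z(success_a, n_a, success_b, n_b):
        """Two-sided z-test for a difference in conversion/retention rates."""
        p_a, p_b = success_a / n_a, success_b / n_b
        p_pool = (success_a + success_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
        return p_b - p_a, p_value

    # Control: existing app. Treatment: proposed change (e.g. a new streak nudge).
    lift, p = two_proportion_z(success_a=41_200, n_a=100_000,   # day-7 retention, control
                               success_b=41_950, n_b=100_000)   # day-7 retention, treatment
    ship_it = lift > 0 and p < 0.05
    print(f"lift={lift:+.4f}, p={p:.4f}, ship={ship_it}")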


> Why are employees, who presumably have plush jobs they want to keep, still writing reports that management doesn't want to hear?

They hire people on the autism spectrum who are inclined to say things out loud without much regard/respect for whether they are "supposed to" say it. *cough* James Damore.


I didn't guess that autism was involved in that case, and I'm a little uncomfortable with something that might sound like suggesting that autistic people might be less corporate-compatible.

There are plenty of autistic people who wouldn't say what Damore did, and there are non-autistic people who would.

I also know autistic people who are very highly-valued in key roles, including technical expert roles interfacing directly with customer senior execs in high-profile enterprise deals.

People are individuals, and we tend to end up treating individuals unfairly because of labels and biases, so we should try to correct for that when we can.


On the contrary, autistic people who don't hesitate to speak uncomfortable truths are vital to the health of organizations, and society as a whole. You would all be lost without us.

(Note my indifference to your discomfort with my comment.)


In my opinion it's unhelpful to pathologize behaviour like being blunt or speaking your mind. It's just another expression of the impulse to split the world into an in-group and an out-group. I especially agree with the last paragraph of the GP. Doing this may be fun to do when you're making statements like "autistic people are inherently superior in some ways", but it's obviously an issue when some other misguided person makes a statement that I think most rational people would disagree with, such as "autistic people are inherently inferior in some ways". We are all just people.

I do want to note a tangential topic on social media harming children and young adults.

In my personal experience, kids and young adults, particularly those who grew up immersed in social media (born after ~1995–2000), seem to struggle with recognizing appropriate, undistorted social cues and understanding the real-world consequences of their actions.

As for Snapchat harming kids, I think it is about more than just evil people inflicting "five key clusters of harms".

Even adults often expect the same instant reactions and flexible social dynamics found online, which blinds them to the more permanent, harsher outcomes that exist outside of digital spaces.

Anecdotally, the utter shock that shows on some people's faces when they realize this is sad, and very disconcerting. (At an extreme, think "pranksters" who get shot or punched in the face and are confused about why that happened, when "everyone loves it online".)

How to fix this? The suggested solutions will not solve this problem, as it does not fit the "clusters of harms".


The social media business model is predicated on scaling up obvious and huge conflicts of interest, to scales unfathomable a couple of decades ago.

Basic ethics, and more importantly the law, need to catch up.

Surveilling, analyzing, then manipulating people psychologically to mine them for advertisers is just as real a poison as fentanyl.

And when it scales, that means billions of dollars in revenue and trillions of dollars in market value unrelentingly demanding growth; playing whack-a-mole with the devastating consequences isn't going to work.

Conflicts of interest are illegal in many forms. Business models incorporating highly scalable conflicts of interest need to be illegal.

We could still have social media in healthier forms. They wouldn’t be “monetizing” viewers, they would be serving customers.

Facebook's army of servers isn't required to run a shared scrapbook. All those servers, and most of Facebook's algorithms and now AI, are there to manipulate people to the maximum extent possible.


These all seem like obvious byproducts of an ephemeral, photo-based platform. Beyond these, there's also the shitty "explore" feature, which pushes sexually explicit content and can't be disabled. Surprised that's not mentioned here.

With both of these articles, are we finally getting to a tipping point with social media and its negative effects on people?

People knew smoking killed for decades. Do you think that with no policy change and no regulation, Marlboro and Philip Morris would have let their market tank?

Advertising - banned, smoking indoors - banned, and most importantly, taxing the hell out of them (every 10% increase in cigarette prices results in a 4% decrease in adult consumption and a 7% decrease in youth consumption).

There isn't really a directly comparable policy for taxing these free social media platforms, however, and the whole thing is a bit stickier. Before any policies can stick, the public needs to be aware of the issues. That is tough when most people's 'awareness of issues' comes directly from social media.


I think part of it is that social media has now been around long enough that it is becoming possible to study the long term effects on our monkey brains from being constantly exposed to the lives and opinions of millions of strangers on a global level.

for sure. but if ANY of that kind of thing gets in the way of profits, well then that's not OK. in capitalism, profit is the only thing that matters. CSAM? drugs? underage use? pfft.

until this country gets serious about this stuff - and don't hold your breath on that - this is the absolute acceptable norm.


Some readers here presumably work at Snap. How do you feel about this and your work? Do you sleep soundly at night?

I don't work for Snap, but they do use some software I wrote, so I guess that's close enough.

I find all of these "social media is bad" articles (for kids or adults) basically boil down to: Let humans communicate freely, some of them will do bad things.

This presents a choice: Monitor everyone Orwell-style, or accept that the medium isn't going to be able to solve the problem. Even though we tolerate a lot more monitoring for kids than adults, I'm still pretty uncomfortable with the idea that technology platforms should be policing everyone's messages.

So I sleep just fine knowing that some kids (and adults) are going to have bad experiences. I send my kid to the playground knowing he could be hurt. I take him skiing. He just got his first motorcycle. We should not strive for a risk-free world, and I think efforts to make it risk-free are toxic.


Pouring the resources of a company the size of Snap into getting as many kids as deeply addicted to their app as possible is not the same as letting them communicate freely. Besides that, I don't know of any parent who would want ephemeral and private communication between their child and a predatory adult. Snap is also doing nothing to shield them from the pedophiles, drug dealers, and arms dealers who are using the same app as a marketplace.

The damning part is that these companies know the harm they are doing, and choose to lean into it for more $$$.

Thanks for your response. Your open source contributions are perhaps less damned than those of an actual Snap employee ;)


Are you not willing to even entertain the notion that communication platforms could influence the way their users communicate with each other? That totally ephemeral and private image-based social media could promote a different type of communication compared to something like, say, HN, which is public and text-based? Sure, you take your kid skiing, but presumably you make them wear a helmet and have them start off on the bunny hill. I agree that a risk-free world is an insane demand that justifies infinite authoritarian power, but there is a line for everyone.

Yes, I make my kid wear a helmet. I make sure his bindings are set properly. I make sure he's dressed warmly. I make sure he's fed and hydrated.

I am the parent. The ski resort provides the mountain, the snow, and the lifts.

He's a bit too young to be interested in taking pictures of his wang but I'd like to think this is a topic I can handle. Teaching him to navigate a dangerous world is sort of my job. I'm not losing sleep over it.


This is about societal level harm. Sure, you do everything right, but most people don't.

I also do everything correctly, but one time a drunk driver still almost killed me.


Every authoritarian wants more power to prevent "societal level harm". I seem to be hearing that one a lot lately.

> I seem to be hearing that one a lot lately.

Oh really? I'd love to hear a few examples.


Well, good luck when he does become a teenager. Many parents thought the same as you up until that point.

> Let humans communicate freely, some of them will do bad things.

That’s just normal phone calls - no one is complaining about those.

But social networks have algorithms that promote one kind of content over another.

I keep getting recommended YouTube videos of gross and mostly fake pimple removal, on Facebook AI generated fake videos of random crap like Barnacle removal, and google ads for an automated IoT chicken coop.

I have never searched for these things and no living person has ever suggested such things to me. The algorithm lives its own life and none of it is good.


You have a very different experience than I do! My Youtube algorithm suggestions are wonderful, full of science and engineering and history and food and travel and comedy and all kinds of weird esoteric things that would never have been viable in the broadcast TV I grew up with. I am literally delighted.

Maybe you're starving the algorithm and it's trying random things? Look up how to reset the YT algo, I'm sure it's possible. Then try subscribing/liking a few things that you actually like.

If you're within a standard deviation or two of the typical HNer, look up "Practical Engineering" and like a few of his videos. That should get you started.


Perhaps, but my point is that it's not 'humans communicating freely'; the strange things I see are not my choice.

I thought you had changed the subject to Youtube? Snap is person to person communication, Youtube is broadcast to the public. I don't think Youtube knows who my friends are. I wouldn't call it social media; it's just media.

It makes no sense to group these things together; "youtube leads to sexploitation" is nonsense. What I think I'm hearing is ennui about technology in general, which I can understand, but keep your arguments straight.


Exactly. It's marginal benefit vs marginal harm. Teens can "communicate freely" over text, voice, and video calls, including sending each other photos... TO THEIR CONTACTS.

There is no need for location based recommendations, streaks, nudges, etc. They should be building their social networks in the real world. And if they need friends outside of school, that can come through parentally facilitated activities like sports, clubs, etc. Later you start playing Magic the Gathering at the nerd shop or go to "shows" at the VFW hall.


I've worked there, so maybe my 2 cents: at the end of the day I have mouths to feed. Honestly, I used to be idealistic about employer moral compass and so on, but coming from the bottom in socio-economic terms, I will exercise my right to be cynical about it.

I did some support work for the Trust & Safety team around the time of the whole debate about Section 230, and from what I can tell Snap has some quite good flagging mechanisms for people selling firearms, drugs, and especially puberty blockers.

The thing I can say is that a lot of parents are asleep at the wheel with teenagers and not following what is going on with their children.


Each generation of parents fails on something.

This generation is failing at recognizing the dangers of social media.

Teenagers and even children are being radicalized online, sold dangerous diets, manipulated by state-sponsored creators, lied to by companies, taught anti-science, and the list goes on and on.

How is all this not heavily regulated? Even adults need protection from scammers, fake products, misleading ads, hidden product promotions that look like personal opinions...

We have gone back 100 years when it comes to consumer rights, and children are the ones who are paying the highest price.


As a parent, I never failed to recognize it.

I just failed to be able to do anything about it.

You were a teenager once, I'm sure you can remember how little influence your parents actually had over how you actually spent your time. Or at least saw that in your friends.

This is a society wide thing. Parents are pretty much powerless.

So yes, regulation. But you'll see how discussion of any proposal for this goes down in this forum. Just imagine across the whole polis.


Genuinely asking: is it impossible to just enforce a no-phones-until-16+ rule with your kids? The reasons against it that I see are either "it's too hard for the parents" or hypothetical ("they would have no social life"). There were tonnes of things I wanted to do as a teenager that my parents prevented me from doing, including things my friends were allowed to do by their less strict parents. There were of course things I did despite them, but phones seem like a simple one for parents to control, given that teenagers can't afford them otherwise until they start working at 16+. Allowing instant messaging via a computer seems like a nice middle ground.

I would have strongly agreed with you if we were talking ten years ago, but with everything using two-factor authentication these days it is pretty much a requirement to have a phone. Even for children to do school work.

Like, there are parental control systems and all that you could set up, but that requires you to be pretty tech-savvy as a parent. I think you are already doing great if you keep your child away from phones and tablets until they are of school age, but keeping teenagers away from smartphones seems very unrealistic if you don't live in a remote commune or something.

I really, really wish it weren't the case.


Only if you're willing to ban them from ever going to friends' houses, where they'll use their friends' devices to do it.

> where they'll use their friends' devices to do it.

That'd already be much, much better than using it at every possible moment.

Why do people just give up proactively? Yes, you can't prevent it 100%, but you can still try to restrict it as much as possible.


> Why do people just give up proactively?

Because we're up against trillion dollar companies that employ armies of experts with the goal of inducing addictive behavior. We're deeply outgunned.

Because kids have a genuine need for socialization, and being the one without a phone means you just don't get invited to shit. Birthday parties, hangouts, random trips to the ice cream shop.

Because kids are smart. I'm very technical - I had a pfSense firewall, Pihole, and Apple's screen time on my kids' devices. They found ways around that within hours; kids at school swap VPN/proxy instructions and whatnot.

Because kids these days get a school laptop, on which I have zero admin rights.

Because I don't want to be a jail warden, I want to be a parent.


Yes, I understand all of that. What I meant was: refusing smartphones as long as possible. For example, as long as only ~50% of your kid's friends have a smartphone, it should be possible to still resist. Just don't be one of those parents who (unknowingly) help create the problem in the first place by succumbing to Big Tech on the first occasion.

A cell phone is available 168 hours a week. A friend's phone might be available, say, 10% of that?

Friend gets a new phone, gives you the old one. Neighbor has open wifi. Hide it deep in the giant pile of laundry in your bedroom.

Whack-a-mole is fun at an arcade. It's not fun when it's your kids.


That's still significantly better than having it available at the dinner table, no?

The goal of parenting is to raise good kids. Unfortunately, it's not always going to be fun.


We don't permit phones at the dinner table, no. Nor in the bedroom.

But we've learned things like "no Snapchat at all" make for a social pariah, which is frequently worse than the problem it's trying to solve.


We can always take the phone away. As a parent of a teenager, sometimes I have to make hard choices. This is one of them.

Kids don't need cellphones. We want them to have one often because of our own insecurities.


> We can always take the phone away.

Kids are… resourceful.

https://www.cbsnews.com/news/teen-goes-viral-for-tweeting-fr...

Last week, a 15-year-old girl named Dorothy looked at the smart fridge in her kitchen and decided to try and talk to it: "I do not know if this is going to tweet I am talking to my fridge what the heck my Mom confiscated all of my electronics again." Sure enough, it worked. The message Dorothy said out loud to her fridge was tweeted out by her Twitter account.

(And before that, she used her DS, her Wii, and a cousin's old iPod. There's always a friend's house, too.)


I'd posit that social media restricted-solely-to-a-fridge is still significantly less harmful than social media literally-always-within-arms-reach.

This was the best they could come up with on short/no notice.

$50 (or a hand-me-down from a friend) will buy you an Android burner phone that can hop on the neighbor's wifi.


Confiscate the hell out of it. That's what parenting is for. How much money is a kid going to spend on burner phones before deciding to just stop bringing them to the house?

> Confiscate the hell out of it.

People in prisons manage to conceal contraband (including cell phones) in their cells, and they have substantially fewer hiding spots.

Turning your house into a prison with random room tossings has consequences, too.


I don't really understand what you're arguing for here. Obviously prisons understand they can't catch everything, but they try anyway because it's still better than letting prisoners bring in whatever they want.

They try, and they fail comprehensively, and that's despite being very willing to do things that would be extremely clear child abuse if I tried them on my kids.

The prison warden doesn't care if the prisoners love him 20 years from now.


I ran a secret Ethernet cable to the router to circumvent parental Internet restrictions; that was in the early 2000s. Teenagers will be teenagers.

My dad took away my PC when I got bad grades and put it in his room.

I took the mobo, CPU, RAM, hard drive, and PSU out of the case, put them in my backpack, and went to my friend's house. He never noticed.

That said, I still couldn't use the PC when I was at home. Physically taking away the machine wasn't really the punishment.

This would apply to cell phones and such too. Sure they might figure out some workaround that works sometimes, but it won't be the same.


> You were a teenager once, I'm sure you can remember how little influence your parents actually had over how you actually spent your time.

Actually, I remember the opposite. I had problems with screen time so my parents put a password on the computer. It wasn't 100% effective, of course, but it was closer to 90% than 0%.


> You were a teenager once, I'm sure you can remember how little influence your parents actually had over how you actually spent your time.

There might be bias here in remembering one's own teenage years, because I'm sure many teenagers _think_ their parents have no influence over them. If you ask the parents, though, I'm sure many would agree they aren't fully in control, but they do notice they still have a lot of influence.

Personally, the older I grow, the more I realize how much influence in general my parents actually had over me.


I want to add that it is important to show that you are against those things as well; too many people react by shifting blame when they would stand to gain more by saying, "Yeah, I don't like that either."

Regulation of social media probably polls pretty well; I think polls have even found that most high schoolers want to reduce or end their usage of it.

Phone use during class time is banned in my kid's high schools.

Makes no difference -- it's completely unenforced by the teachers. The students are practically adults physically, teachers don't want to risk the confrontation, etc. And the kids suffer for it.

And my youngest uses no social media but their mind is still eaten by constant phone usage.

More than social media, the problem is the device. The form factor.

The "smartphone" is a malevolent technology.


Petition to build Faraday cages into every public school classroom in the country

Phones are banned on school grounds here and it's working. My kids have never been allowed social media at home, and they don't see friends doing it because phones are not allowed at school at all.

Neither gives a shit about their phone, and we have to force them to take one when they're going out so we can call them if we need to.


>> How is all this not heavily regulated?

It isn't properly regulated because the CEOs and founders just moan that it isn't possible to regulate so much user-generated content. I'm of the opinion that, in that case, their sites shouldn't exist, but people seem to have convinced themselves that Facebook et al. provide too much value to stand up to.


> I’m of the opinion that, in that case, their sites shouldn’t exist

I totally agree with this.

If, for example, hydrogen cars exploded all the time, that would not be a reason not to regulate them but a reason for a complete ban.


> CEO’s and founders just moan that it isn’t possible to regulate

Surely if a CEO with a billion dollar budget can’t regulate it, neither can a parent?


A parent only needs to regulate their child. Not 25 billion daily posts.

It sounds like extremely convenient offloading of responsibility for a toxic product

I'm curious as to when was the generation that failed to recognize the dangers of invitingly large (treasure) chests. :3

(I finished watching the last episode just as you posted this comment, still giddy about it. :D

I tried to spread out watching the season for the first time over more than a week, and failed miserably...)


What exactly do you want regulated? What powers do you want Trump to have to control the speech of Americans?

There was a related thread on the front page: TikTok is harming children at an industrial scale - https://news.ycombinator.com/item?id=43716665

Since that article is several months old and this one is new, we swapped it out. I assume it makes more sense to discuss the new one. Also, there were lots of criticisms of the other article for supposedly focusing only on TikTok, and those criticisms seem supplanted by this piece. (I'm not arguing whether it's right or wrong, nor have I read it.)


You can essentially just wildcard the social network name and everything still applies. That's the status quo

Except FB, which mostly harms the middle aged.


It was harming kids on an industrial scale back when it was new, before Instagram et al. cannibalized its audience.

Was it? In Facebook’s early days you actually followed your friends and only saw their content. There wasn’t even an algorithm until a few years in when they stopped showing the feed chronologically. It wasn’t perfect but it was largely just an extension of your IRL social life.

Getting into the limits of my memory here, but as far as I recall, early Facebook didn't have a feed at all, chronological or otherwise. It was just a directory of students at your own school, skeuomorphic to the physical "facebook" that universities would hand out each semester to students on campus, which gave you a headshot of everyone along with their room numbers. At some point, they added an updateable "status" field to the profiles, to tell your friends how you were feeling that day or what you were doing or whatever. When they started showing those on the home page instead of just on the profiles, then there was a feed, which eventually transformed into the monster we see today.

But early on, it was just a digital phonebook with headshots and exactly equivalent to physical items that schools already distributed.


Yes. Early FB was a completely different application and pretty similar to MySpace.

Would generally disagree here. Especially when limited to edu emails, it was focused on human connections. Even after it opened to a broader audience, it was centered on explicit connections you already had (or, to some limited extent, discovering new ones through network effects).

Now whether social networks in even these basic forms are harmful (discouraging physical connections, isolation in digital environments, etc), is maybe a different topic.

Exposure to echo chambers of harmful, hateful content driven by algorithms seems to be more the focus here. MySpace, early FB, or even AIM/ICQ, and others focused on facilitating connections and communication didn’t drive the same level of harm imo.


The same outlet did the TikTok story:

Following the format of our previous post about the “industrial scale harms” attributed to TikTok, this piece presents dozens of quotations from internal reports, studies, memos, conversations, and public statements in which Snap executives, employees, and consultants acknowledge and discuss the harms that Snapchat causes to many minors who use their platform.


There is a statistic that the average teenager gets 240 smartphone notifications a day: https://www.michiganmedicine.org/health-lab/study-average-te...

Young people have more time ahead of them than anyone. Consequently, in my opinion, young people should be receiving information with a long time period of usefulness. Smartphone notifications have a very short half-life.


Does anyone remember the Hacker News thread last week about Black Mirror?

https://news.ycombinator.com/item?id=43648890

Many in the comments were criticizing Black Mirror for being unrealistic. Especially in Black Mirror’s assumption that negative technologies would be introduced into society and ruin people without folks realizing.

Well…Snapchat is basically a Black Mirror story. It was introduced and became widespread without much debate. The negative effects are happening. We know of them. Nothing happens. So the Black Mirror criticizers were wrong.

“You best start believing in Black Mirror stories Mrs Turner. You’re in one!”

And so are the rest of us. Look around you and tell me the world isn’t a Black Mirror episode.


I take the opposite viewpoint from the criticizers -- the episodes are too real, too foreseeable, to the point that I would almost ask the Black Mirror writers not to give "them" any more ideas.

It's the same problem as Charles Stross wrote about in "Don't Create the Torment Nexus":

https://www.antipope.org/charlie/blog-static/2023/11/dont-cr...

Discussed on HN in 2023, with 392 comments: https://news.ycombinator.com/item?id=38218580

The question is whether you want Black Mirror producers or SciFi authors to continue generating art and entertainment. Those have value to people with literary comprehension, but they might also be misinterpreted by people who believe them to be a roadmap. My fear is that by shifting the medium from novel to TV show, you're removing the slight filter that keeps out those with insufficient literacy to sit down with an interesting 400-page paperback and opening it to those who can press "Play".


How is Snapchat a black mirror episode? Do you think even 10% of Snapchat users are harmed in the ways discussed in this article?

This is like saying we are living in Dune because we have some people in space.

So just because some people are harmed in society suddenly black mirror is not too on-the-nose or unrealistically pessimistic?


Yes. A large fraction of Snapchat's users are significantly harmed.

First hand, I see it all the time in students. There's an extreme unhealthy obsession with social media that leads to serious inferiority complexes and depression. All of this wrapped in algorithms that compel people to participate in various ways, from streaks to points, etc.

Quantitatively, everything from anxiety to depression to suicide has more than doubled in teens.

Oh heck, forget about teens. I see it in plenty of adult groups, like mothers. There's a major pressure from others to keep up, serious self-doubt for normal setbacks, unrealistic expectations around even mundane things.

Social media is black mirror, and we're doing it to ourselves.


> Social media is black mirror, and we're doing it to ourselves.

You mean black mirror is a pessimistic exaggeration on the state of society and technology. It’s not the other way around. What you’re observing is not profound, it’s literally how the writers approach their process for the show.

In fact, you’re doing this weird thing where you make it seem like black mirror was prophetic and it came before all the observations about tech and society, when it was clearly the other way around.

The criticism from the thread you’re referencing is that their approach is too on the nose and the villains are cartoonish. There’s no subtlety or even anything interesting anymore in the latest seasons. A critique on software subscriptions? We’ve been doing that since it was invented.

Those are fair criticisms.

What’s missing from black mirror, this article, and your perspective is how much social media has benefited everybody. How many jobs has it created? How many brand new careers and small businesses exist only because of social media? It’s an entire economy at this point. The good and bad effects of democratization of information dissemination.

There’s hardly an interesting analysis or critique of the actual current state of tech & society because you’re out here looking for the bad and ignoring the good. Much like black mirror is doing. Its main goal is to be as shocking as possible. That’s why in the thronglets episode, which I did enjoy, there was so much pointless gore. Yes, the point was that the throng had to see what humans are capable of, but there’s no reason to show all the gore associated with drilling through your head or dismembering a dead body. All of that is bottom of the barrel shock value stuff, which is ultimately what black mirror has devolved into.


What are you responding to?

Children committing suicide at twice the rate is bad. Childhood depression at twice the rate is bad. Declining scores on every metric of well-being and attainment is bad.

I'm ignoring the good?!

No. When kids that I know self harm at alarming rates because of social media, I'm not ignoring the good.

You're prioritizing some abstract nonsense over the actual people who are suffering.


Your defense of social media seems to be that the jobs it has created outweigh the horrible things it has done to many of the young people in society.

Some of us apparently apply very different weighting to the two sides and come to a different conclusion on the efficacy of social media.


I’m not defending social media. I’m talking about how there’s no nuance in ops perspective, black mirror, or the article. It only highlights the negatives and that’s all there is. Basically nobody is looking at the positives. If you’re going to do a societal harm analysis you should probably consider the benefits too before coming to a conclusion.

But to your point about young people in society, this feels like a classic “but oh, isn’t anybody thinking about the children” moment. https://en.m.wikiquote.org/wiki/Think_of_the_children

It’s a logical fallacy. If we are simply thinking about whether any of society is harmed we might as well just do nothing at all and cease to exist. Nobody in this thread is willing to engage and sincerely discuss the benefits vs the harms.


You're welcome to present the benefits. I can only speak for myself though: they don't amount to a thing worth otherwise poisoning society for.

Other than people that already agree with you, I'm not sure who you are appealing to by suggesting others are caught up in "think of the children".

I bring children up because studies seem to focus on the negatives of social media on children in particular. Also I raised three children and watched social media play out in their lives.


Yeah, it would be far more valuable to have the villains be everyday folks like you and me, just trying to make a buck, too busy or selfish to see the implications of the software they make.

That doesn't really track.

Most technologies in Black Mirror are fully implemented as-is, usually with clear and prescient knowledge of the downsides known and suppressed by the owner of the technology.

Snapchat is not that. It started out as an innocent messaging app and slowly mutated into the monster it is after it was already widely adopted.

The criticism of Black Mirror is that it presents immediate widespread adoption of the new Torment Nexus 5000, which was always intended to be a force of evil and suffering. Everyone knows exactly what the Torment Nexus is and willingly accepts it. Snapchat only became a torment nexus after it was established and adopted, and that transformation was done maliciously.


Did some work with researchers at a local university and found out that Snapchat is like the #1 vector for production and distribution of CSAM. Same thing when it came to online grooming.

My guess is that anywhere kids are will become that.

> We suggested to them some design changes that we believe would make the platform less addictive and less harmful: [...] 5. Stop deleting posts on Snap’s own servers.

Can someone say the original intent or de-facto use case of Snapchat, and how that's changed over time?

Around the time it started, I heard that it was for adult sexting, with people thinking they could use it to send private selfies that quickly self-destruct. So that (purportedly) the photos can't be retained or spread out of the real-time person-to-person context in which they were shared. (I guess the ghost logo was for "ephemeral".)

And then I vaguely recall hearing that Snapchat changed the feature, or got rid of it.


Trigger warning for descriptions of ruined teenager lives (including up to death), complete with happy mugshots of "before".

(Some things are worth getting disturbed by though.)


Hi @dang,

Sorry to hijack this thread with a completely off-topic issue, but I have no idea where else to reach out about this. I did a submission yesterday showcasing the work of some of my colleagues at UofT; it's satire, but it is backed by serious academic work. I was very sad to see it quickly got flagged and removed from the front page when it started to generate discussion. I just wanted to ask you to unflag it, or provide an explanation as to why it should remain flagged and is breaking the guidelines, as I believe censoring/muting academics on important topics such as AI in the current political climate is yet another dangerous step towards fascism.

The submission in question:

https://news.ycombinator.com/item?id=43704319

Thanks for listening to my plea, and again apologies for being so off-topic!

Best,

n

Edit: formatting/typo for clarity


Please email hn@ycombinator.com with questions like this.


