> “Think it would be interesting to investigate how healthy Snapstreak sessions are for users… If I open Snapchat, take a photo of the ceiling to keep my streak going and don’t engage with the rest of the app, is that the type of behavior we want to encourage? Alternatively, if we find that streaks are addictive or a gateway to already deep engagement with other parts of Snapchat, then it would be something positive for “healthy” long term retention and engagement with the product.”
For a second I thought this employee was talking about what's healthy for the user. Certainly not though; they mean what's healthy for the "user-base". I find it very interesting how this sort of language shapes employee behaviour. Using the concept of "health" to mean retention and engagement can crowd out thinking about health from the user's perspective: the terminology is similar, but the goals are very different, sometimes even opposite.
Bingo. If more people carefully analyzed language, they could spot earlier when someone is on the slippery slope of, let's call it, anti-human beliefs, and then help them correct course.
If we don't, these narratives get normalized. A society is on a curve of collective behavior; there is no stable point, only direction.
I'd say so. Some obsess over their commit history, but it is mostly out of the way and only a representation of how active you are. Snapchat streaks are a key feature, designed to keep you coming back every day; you can even pay a dollar to restore one if you miss a day.
Back when I was graduating from uni, one day I just decided that the pressure of Snap streaks was too much. I had streaks of 700+ days with a person I barely talked to. But most of my streaks were with my best friends, people I talked to every day.
It was like a daily ritual, and I couldn't escape it for a while. I decided to go cold turkey, since it felt like the only option. All my friends moaned and complained for a while. They even tried to revive the streaks, but I persisted. Feels really silly when I look back, but 700 days means I was sending snaps every day for 2 years straight.
I still have the app, and there are still a few friends of mine who send me snaps about their whereabouts, but I have stopped using it. Blocking the notifications was one of the best decisions I could have made, since that was the single biggest factor in not opening the app at all.
> Blocking the notifications was one of the best decisions I could have made
I’ve done this for all social media, and more recently deleted all social apps. I’ll go on Facebook sometimes through the web browser, mainly for marketplace.
Facebook was the first app I tested disabling notifications on. This had to be about 10 years ago; I noticed they would give me a new notification every 5-10 minutes. I was addicted to checking what the notification was. Usually garbage, and the less I used Facebook, the more garbage the notifications became. Since I’ve stopped using Facebook for anything but marketplace, my entire feed is now garbage. The algorithm no longer knows what to do with me and my former history.
Having no social apps has been a hard change to get used to. But I feel so much better not feeling like I need to scroll.
I only scroll on Hacker News now… which is easy because the top page doesn’t get that many updates in a day, and after several minutes of browsing “new” I’m satiated, having seen all I might want to see.
Anyone remember YikYak? I was in university at the time; the explosive growth was wild. After the inevitable bullying, racism, threats, and doxxing that came with the anonymous platform, YikYak enabled geofencing to disable the app on middle and high school grounds.
I think every social media platform with an "age limit" should be required to do this as well. And open it up, so that anyone can create their own disabling geofence on their property. How great would it be to have a Snapchat-free home zone? Or FB, or TikTok.
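Out of curiosity, here's a minimal sketch of what the client-side check could look like. The zone registry (no_app_zones), the names, and the assumption that the client enforces this honestly are all mine; this isn't anything YikYak or Snap actually shipped:

    import math

    # Hypothetical registry of (lat, lon, radius_m) disabling geofences,
    # e.g. one registered by a property owner. Illustrative data only.
    no_app_zones = [
        (40.7128, -74.0060, 150.0),  # a home, 150 m radius
    ]

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points, in meters.
        r = 6_371_000  # mean Earth radius in meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def app_disabled_here(lat, lon):
        # True if the device sits inside any registered geofence.
        return any(haversine_m(lat, lon, zlat, zlon) <= radius
                   for zlat, zlon, radius in no_app_zones)

The distance check is the trivial part; the hard parts are deciding who gets to register zones, and the fact that the platform itself has to cooperate in enforcing them.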
At my college, someone got kicked out for yikyacking "gonna shoot all black people a smile tomorrow" and everyone quickly realized exactly how anonymous it really was after the guy was found a few hours later.
Thing is, there was a comma between "people" and "a smile", which made his poorly thought-out joke read a lot differently. A dumb way to throw away your education.
Yes, that's what he tried to argue (it was a joke bro) in the lawsuit that followed, to try to get back in. He lost.
Personally, I think he just flubbed it. At the time, memes like "I'm gonna cut you <line break> up some vegetables" were popular. Can't expect a dumbass edgelord to have good grammar.
Either way, it was a stupid thing to do and he paid for it.
So basically, if he hadn't added the comma, he'd still be at college.
So he got kicked out because of an extra comma, which he added to make it even more edgy, at the cost of reducing plausible deniability to nearly zero.
The phrase "shooting a smile at someone" means to briefly or quickly glance at someone while smiling. Perhaps "shot a glare in his direction" is more familiar?
Depending on the location of the comma, the speaker is either planning to make a happy gesture at people, or planning to kill people with a firearm, which makes them happy.
Ah, a world where this is taken to an extreme might even bring back the mythical https://en.wikipedia.org/wiki/Third_place, rapidly disappearing in the American suburb and city alike... because it becomes the only place in the community where property owners don't geofence to forbid social media use!
But of course, social media companies will pour incredible amounts of money into political campaigns long before they let anything close to this happen.
Technological solutions to societal problems just don't work.
Some $EVIL technology being fashioned to harm individuals isn't to blame - the companies behind that technology are. You can pile up your geofencing rules, the real solution lies somewhere between you deleting the app and your government introducing better regulation.
It can be, but I think practically it can't be. Maybe that doesn't fit into a nice logical statement, but there you have it. Or: when you build yourself a constantly-accelerating, never-stopping racecar and get on it, it's hard to build a steering wheel or brake pedal for it. Or or: it's a lot easier to get into a deep hole than to get out of one.
That would be a good start. I guess someone at Apple has already been brainstorming about it for a while. I still think geofencing is a poor bandaid to patch a problem we've created in the first place. Just like notification filtering rules, it's like liquor vendors referring you to addiction therapy.
> Technological solutions to societal problems just don't work.
Ehhh, that's just a poorly thought out slogan whose "truth" comes from endless repetition. Societal problems can have technical origins or technical enablers. In which case a technical solution might work to make things better.
So no, there's no technical solution to "people being mean to each other," but there is a technical solution to, say, "people being meaner to each other because they can cloak themselves with anonymization technology."
> Societal problems can have [...] technical enablers.
That was my point.
> [...] there is a technical solution to, say, "people being meaner to each other because they can cloak themselves with anonymization technology."
I've never used (or even heard of) YikYak before, but what solution are you suggesting exactly? De-anonymisation? How would you achieve that? Suppose you have a magical^W technological de-anonymising wand, how would that not cut both ways?
So YikYak enabled geofencing, to alleviate the problem they've caused in the first place? But let's suppose they didn't do that.
How could I, as an average parent trying to protect my child, employ such a solution on my own? Could my tech-savvy neighbor help me somehow? Is there a single person outside of YikYak who can build a solution that any parent could use?
(Since the TikTok post was swapped out with this one, I'll repost my late comment here, since it applies to a lot of companies.)
> As one internal report put it: [...damning effects...]
I recall hearing of related embarrassing internal reports from Facebook.
And, earlier, the internal reports from big tobacco and big oil, showing they knew the harms, but chose to publicly lie instead, for greater profit.
My question is... Why are employees, who presumably have plush jobs they want to keep, still writing reports that management doesn't want to hear?
* Do they not realize when management doesn't want to hear this?
* Does management actually want to hear it, but with overwhelming intent bias? (For example, hearing that it's "compulsive" is good, and the itemized effects of that are only interpreted as emphasizing how valuable a property they own?)
* Do they think the information will be acted upon constructively, non-evil?
* Are they simply trying to be honest researchers, knowing they might get fired or career stalled?
* Is it job security, to make themselves harder to fire?
* Are they setting up a CYA paper trail for themselves, in case the scandal becomes public?
* Are they helping their immediate manager to set up CYA paper trails?
Interesting. Any sense whether that system was consciously constructed? (Like, task a group with generating product changes appealing to users, and then cherry-pick the ones that are profitable, to get/maintain a profitable, good product.)
Or was it not as conscious, more an accident of following industry conventions for corporate roles, plus corporate inefficiency and miscommunication?
It was extremely scientifically methodical. Everything is designed from UX and other sources of holistic research, then validated with the most built-out A/B test system you can imagine. Only winners are kept.
Meta is doing this thousands of times per month, all the time.
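For anyone who hasn't seen this up close, "only winners are kept" mechanically means a significance test on some engagement metric. A minimal sketch, with made-up numbers and a plain two-proportion z-test standing in for whatever Meta actually runs:

    import math

    def keep_winner(ret_a, n_a, ret_b, n_b, alpha=0.05):
        # One-sided two-proportion z-test: does variant B retain better than A?
        p_a, p_b = ret_a / n_a, ret_b / n_b
        p_pool = (ret_a + ret_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))  # normal CDF tail
        return "ship B" if p_value < alpha else "keep A"

    # Made-up numbers: a new nudge retains 20.5% of 50,000 users vs. 20.0%
    # for control. A half-point lift at that scale clears significance.
    print(keep_winner(10_000, 50_000, 10_250, 50_000))  # -> ship B

Note what the test doesn't ask: whether the extra retention is good for anyone.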
> Why are employees, who presumably have plush jobs they want to keep, still writing reports that management doesn't want to hear?
They hire people on the autism spectrum who are inclined to say things out loud without much regard/respect for whether they are "supposed to" say it. *cough* James Damore.
I didn't guess that autism was involved in that case, and I'm a little uncomfortable with something that might sound like suggesting that autistic people might be less corporate-compatible.
There are plenty of autistic people who wouldn't say what Damore did, and there are non-autistic people who would.
I also know autistic people who are very highly-valued in key roles, including technical expert roles interfacing directly with customer senior execs in high-profile enterprise deals.
People are individuals, and we tend to end up treating individuals unfairly because of labels and biases, so we should try to correct for that when we can.
On the contrary, autistic people who don't hesitate to speak uncomfortable truths are vital to the health of organizations, and society as a whole. You would all be lost without us.
(Note my indifference to your discomfort with my comment.)
In my opinion it's unhelpful to pathologize behaviour like being blunt or speaking your mind. It's just another expression of the impulse to split the world into an in-group and an out-group.
I especially agree with the last paragraph of the GP. Doing this may be fun when you're making statements like "autistic people are inherently superior in some ways", but it's obviously an issue when some other misguided person makes a statement that I think most rational people would disagree with, such as "autistic people are inherently inferior in some ways". We are all just people.
I do want to note a tangential topic on social media harming children and young adults.
In my personal experience, kids and young adults, particularly those who grew up immersed in social media (born after ~1995–2000), seem to struggle with recognizing appropriate, undistorted social cues and understanding the real-world consequences of their actions.
As for Snapchat harming kids, I think it is about more than just evil people inflicting "five key clusters of harms".
Even adults often expect the same instant reactions and flexible social dynamics found online, which blinds them to the more permanent, harsher outcomes that exist outside of digital spaces.
Anecdotally, the utter shock that shows on some people's faces when they realize this is sad, and very disconcerting. (At an extreme, think "pranksters" who get shot or punched in the face and are confused why that happened, when "everyone loves it online".)
How to fix this? The suggested solutions will not solve this problem, as it does not fit the "clusters of harms".
The social media business model is predicated on scaling up obvious and huge conflicts of interest, to scales unfathomable a couple of decades ago.
Basic ethics, and more importantly the law, need to catch up.
Surveilling, analyzing, then manipulating people psychologically to mine them for advertisers is just as real a poison as fentanyl.
And when it scales, that means billions of dollars in revenue and actual trillions of dollars in market value unrelentingly demanding growth; playing whack-a-mole with the devastating consequences isn’t going to work.
Conflicts of interest are illegal in many forms. Business models incorporating highly scalable conflicts of interest need to be illegal.
We could still have social media in healthier forms. They wouldn’t be “monetizing” viewers, they would be serving customers.
Facebook’s army of servers isn’t required to run a shared scrapbook. All those servers, and most of Facebook’s algorithms and now AI, are there to manipulate people to the maximum extent possible.
These all seem like obvious byproducts of an ephemeral, photo-based platform. Beyond these, there's also the shitty "explore" feature that pushes sexually explicit content and can't be disabled. Surprised that's not mentioned here.
People knew smoking killed for decades. Do you think that with no policy change and no regulation, that Marlboro and Philip Morris would have let their market tank?
Advertising - banned, smoking indoors - banned, and most importantly, taxing the hell out of them (every 10% increase in cigarette prices results in a 4% decrease in adult consumption and a 7% decrease in youth consumption).
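Back-of-envelope, assuming those elasticities hold and compound per 10% step (a big assumption over large price swings):

    # Cited elasticities: each 10% price increase -> -4% adult, -7% youth use.
    def consumption_after(price_increase_pct, change_per_10pct):
        steps = price_increase_pct / 10
        return (1 + change_per_10pct / 100) ** steps

    # A 50% price hike, compounding per 10% step:
    print(f"adults: {consumption_after(50, -4):.0%}")  # ~82% of baseline
    print(f"youth:  {consumption_after(50, -7):.0%}")  # ~70% of baseline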
There isn't really a directly comparable policy for taxing these free social media platforms, however, and the whole thing is a bit stickier. Before any policies can stick, the public needs to be aware of the issues. That is tough when most people's awareness of the issues comes directly from social media.
I think part of it is that social media has now been around long enough that it is becoming possible to study the long term effects on our monkey brains from being constantly exposed to the lives and opinions of millions of strangers on a global level.
for sure. but if ANY of that kind of thing gets in the way of profits, well then that's not OK. in capitalism, profit is the only thing that matters. CSAM? drugs? underage use? pfft.
until this country gets serious about this stuff - and don't hold your breath on that - this is the absolute acceptable norm.
I don't work for Snap, but they do use some software I wrote, so I guess that's close enough.
I find all of these "social media is bad" articles (for kids or adults) basically boil down to: Let humans communicate freely, some of them will do bad things.
This presents a choice: Monitor everyone Orwell-style, or accept that the medium isn't going to be able to solve the problem. Even though we tolerate a lot more monitoring for kids than adults, I'm still pretty uncomfortable with the idea that technology platforms should be policing everyone's messages.
So I sleep just fine knowing that some kids (and adults) are going to have bad experiences. I send my kid to the playground knowing he could be hurt. I take him skiing. He just got his first motorcycle. We should not strive for a risk-free world, and I think efforts to make it risk-free are toxic.
Pouring the resources of a company the size of Snap into addicting as many kids to their app as deeply as possible is not the same as letting them communicate freely. Besides that, I don't know of any parent who would want ephemeral and private communication between their child and a predatory adult. Snap is also doing nothing to shield kids from the pedophiles, drug dealers, and arms dealers who are using the same app as a marketplace.
The damning part is that these companies know the harm they are doing, and choose to lean into it for more $$$.
Thanks for your response. Your open source contributions are perhaps less damned than those of an actual Snap employee ;)
Are you not willing to even entertain the notion that communication platforms could influence the way their users communicate with each other? That totally ephemeral, private, image-based social media could promote a different type of communication compared to something like, say, HN, which is public and text-based? Sure, you take your kid skiing, but presumably you make them wear a helmet and have them start off on the bunny hill. I agree that a risk-free world is an insane demand that justifies infinite authoritarian power, but there is a line for everyone.
Yes, I make my kid wear a helmet. I make sure his bindings are set properly. I make sure he's dressed warmly. I make sure he's fed and hydrated.
I am the parent. The ski resort provides the mountain, the snow, and the lifts.
He's a bit too young to be interested in taking pictures of his wang but I'd like to think this is a topic I can handle. Teaching him to navigate a dangerous world is sort of my job. I'm not losing sleep over it.
> Let humans communicate freely, some of them will do bad things.
That’s just normal phone calls - no one is complaining about those.
But social networks have algorithms that promote one kind of content over another.
I keep getting recommended gross and mostly fake pimple-removal videos on YouTube, AI-generated fake videos of random crap like barnacle removal on Facebook, and Google ads for an automated IoT chicken coop.
I have never searched for these things and no living person has ever suggested such things to me. The algorithm lives its own life and none of it is good.
You have a very different experience than I do! My Youtube algorithm suggestions are wonderful, full of science and engineering and history and food and travel and comedy and all kinds of weird esoteric things that would never have been viable in the broadcast TV I grew up with. I am literally delighted.
Maybe you're starving the algorithm and it's trying random things? Look up how to reset the YT algo, I'm sure it's possible. Then try subscribing/liking a few things that you actually like.
If you're within a standard deviation or two of the typical HNer, look up "Practical Engineering" and like a few of his videos. That should get you started.
I thought you had changed the subject to Youtube? Snap is person to person communication, Youtube is broadcast to the public. I don't think Youtube knows who my friends are. I wouldn't call it social media; it's just media.
It makes no sense to group these things together; "youtube leads to sexploitation" is nonsense. What I think I'm hearing is ennui about technology in general, which I can understand, but keep your arguments straight.
Exactly. It's marginal benefit vs marginal harm. Teens can "communicate freely" over text, voice, and video calls, including sending each other photos... TO THEIR CONTACTS.
There is no need for location based recommendations, streaks, nudges, etc. They should be building their social networks in the real world. And if they need friends outside of school, that can come through parentally facilitated activities like sports, clubs, etc. Later you start playing Magic the Gathering at the nerd shop or go to "shows" at the VFW hall.
I’ve worked there, so maybe my 2 cents: at the end of the day I have mouths to feed. Honestly, I used to be idealistic about employers' moral compasses and so on, but coming from the bottom in socio-economic terms, I will exercise my right to be cynical about it.
I gave some support to the Trust & Safety team around the period of the whole Section 230 debate, and from what I can tell Snap has quite good flagging mechanisms for people selling firearms, drugs, and especially puberty blockers.
The thing I can say is that a lot of parents are asleep at the wheel with teenagers, not following what is going on with their children.
This generation is failing at recognizing the dangers of social media.
Teenagers and even children are being radicalized online, sold dangerous diets, manipulated by state-sponsored creators, lied to by companies, taught anti-science, and the list goes on and on.
How is all this not heavily regulated? Even adults need protection from scammers, fake products, misleading ads, hidden product promotions that look like personal opinions...
We have gone back 100 years when it comes to consumer rights, and children are the ones paying the highest price.
You were a teenager once, I'm sure you can remember how little influence your parents actually had over how you actually spent your time. Or at least saw that in your friends.
This is a society wide thing. Parents are pretty much powerless.
So yes, regulation. But you'll see how discussion of any proposal for this goes down in this forum. Just imagine across the whole polis.
Genuinely asking - is it impossible to just enforce a no-phones-until-16+ rule with your kids? The reasons against it that I see are either "it's too hard for the parents" or hypothetical ("they would have no social life"). There were tonnes of things I wanted to do as a teenager that my parents prevented me from doing, including things my friends were allowed to do by their less strict parents. There were, of course, things I did despite them, but phones seem like a simple one for parents to control, given teenagers can't afford them until they start working at 16+. Allowing instant messaging via a computer seems like a nice middle ground.
I would have strongly agreed with you if we were talking ten years ago, but with everything using two-factor authentication these days, it's pretty much a requirement to have a phone. Even for children to do school work.
There are parental control systems and all that you could set up, but that requires you to be pretty tech-savvy as a parent. I think you are already doing great if you keep your child away from phones and tablets until they are of school age, but keeping teenagers away from smartphones seems very unrealistic unless you live in a remote commune or something.
Because we're up against trillion dollar companies that employ armies of experts with the goal of inducing addictive behavior. We're deeply outgunned.
Because kids have a genuine need for socialization, and being the one without a phone means you just don't get invited to shit. Birthday parties, hangouts, random trips to the ice cream shop.
Because kids are smart. I'm very technical - I had a pfSense firewall, Pihole, and Apple's screen time on my kids' devices. They found ways around that within hours; kids at school swap VPN/proxy instructions and whatnot.
Because kids these days get a school laptop, on which I have zero admin rights.
Because I don't want to be a jail warden, I want to be a parent.
Yes, I understand all of that. What I meant was: refusing smartphones as long as possible. For example, as long as only ~50% of your kid's friends have a smartphone, it should be possible to still resist. Just don't be one of those parents who (unknowingly) help create the problem in the first place by succumbing to Big Tech on the first occasion.
Last week, a 15-year-old girl named Dorothy looked at the smart fridge in her kitchen and decided to try and talk to it: "I do not know if this is going to tweet I am talking to my fridge what the heck my Mom confiscated all of my electronics again." Sure enough, it worked. The message Dorothy said out loud to her fridge was tweeted out by her Twitter account.
(And before that, she used her DS, her Wii, and a cousin's old iPod. There's always a friend's house, too.)
Confiscate the hell out of it. That's what parenting is for. How much money is a kid going to spend on burner phones before deciding to just stop bringing them to the house?
I don't really understand what you're arguing for here. Obviously prisons understand they can't catch everything, but they try anyway because it's still better than letting prisoners bring in whatever they want.
They try, and they fail comprehensively, and that's despite being very willing to do things that would be extremely clear child abuse if I tried them on my kids.
The prison warden doesn't care if the prisoners love him 20 years from now.
> You were a teenager once, I'm sure you can remember how little influence your parents actually had over how you actually spent your time.
Actually, I remember the opposite. I had problems with screen time so my parents put a password on the computer. It wasn't 100% effective, of course, but it was closer to 90% than 0%.
> You were a teenager once, I'm sure you can remember how little influence your parents actually had over how you actually spent your time.
There might be bias here if one remembers one's own teenage years, because I'm sure many teenagers _think_ their parents don't have influence over them. If you ask the parents, though, I'm sure many would agree they aren't fully in control, but they do notice they still have a lot of influence.
Personally, the older I grow, the more I realize how much influence in general my parents actually had over me.
I want to add that it is important to show that you are against those things as well; too many people react by shifting blame when they stand to gain more by saying, "Yeah, I don't like that either."
Phone use during class time is banned in my kids' high schools.
Makes no difference -- it's completely unenforced by the teachers. The kids are practically adults physically, teachers don't want to risk the confrontation, etc. And the kids suffer for it.
And my youngest uses no social media but their mind is still eaten by constant phone usage.
More than social media, the problem is the device. The form factor.
Phones are banned on school grounds here and it's working. My kids have never been allowed social media here at home, and they don't see friends doing it because phones are not allowed at school at all.
Neither gives a shit about their phone, and we have to force them to take it if they are going out, so we can call them if we need to.
It isn’t properly regulated because the CEOs and founders just moan that it isn’t possible to regulate so much user-generated content. I’m of the opinion that, in that case, their sites shouldn’t exist, but people seem to have convinced themselves that Facebook et al. provide too much value to stand up to.
Since that article is several months old and this one is new, we swapped it out. I assume it makes more sense to discuss the new one. Also, there were lots of criticisms of the other article for supposedly focusing only on TikTok, and those criticisms seem supplanted by this piece. (I'm not arguing whether it's right or wrong, nor have I read it.)
Was it? In Facebook’s early days you actually followed your friends and only saw their content. There wasn’t even an algorithm until a few years in when they stopped showing the feed chronologically. It wasn’t perfect but it was largely just an extension of your IRL social life.
Getting into the limits of my memory here, but as far as I recall, early Facebook didn't have a feed at all, chronological or otherwise. It was just a directory of students at your own school, skeuomorphic to the physical "facebook" that universities would hand out each semester to students on campus, which gave you a headshot of everyone along with their room numbers. At some point, they added an updateable "status" field to the profiles, to tell your friends how you were feeling that day or what you were doing or whatever. When they started showing those on the home page instead of just on the profiles, then there was a feed, which eventually transformed into the monster we see today.
But early on, it was just a digital phonebook with headshots and exactly equivalent to physical items that schools already distributed.
Would generally disagree here. Especially when limited to edu emails, it was focused on human connections. Even after it opened to broader audience, it was centered on explicit connections you already had (or to some limited extent discovering new ones through network effects).
Now whether social networks in even these basic forms are harmful (discouraging physical connections, isolation in digital environments, etc), is maybe a different topic.
Exposure to echo chambers of harmful, hateful content driven by algorithms seems to be more the focus here. MySpace, early FB, or even AIM/ICQ, and others focused on facilitating connections and communication didn’t drive the same level of harm imo.
Following the format of our previous post about the “industrial scale harms” attributed to TikTok, this piece presents dozens of quotations from internal reports, studies, memos, conversations, and public statements in which Snap executives, employees, and consultants acknowledge and discuss the harms that Snapchat causes to many minors who use their platform.
Young people have more time ahead of them than anyone. Consequently, in my opinion, young people should be receiving information with a long time period of usefulness. Smartphone notifications have a very short half-life.
Many in the comments were criticizing Black Mirror for being unrealistic, especially its assumption that negative technologies would be introduced into society and ruin people without anyone realizing.
Well…Snapchat is basically a Black Mirror story. It was introduced and became widespread without much debate. The negative effects are happening. We know of them. Nothing happens. So the Black Mirror criticizers were wrong.
“You best start believing in Black Mirror stories Mrs Turner. You’re in one!”
And so are the rest of us. Look around you and tell me the world isn’t a Black Mirror episode.
I take the opposite viewpoint from the criticisers -- the stories are too real, too foreseeable, to the point that I would almost ask the Black Mirror writers not to give "them" any more ideas.
The question is whether you want Black Mirror producers or SciFi authors to continue generating art and entertainment. Those have value to people with literary comprehension, but they might also be misinterpreted by people who believe them to be a roadmap. My fear is that by shifting the medium from novel to TV show, you're removing the slight filter that keeps out those with insufficient literacy to sit down with an interesting 400-page paperback and opening it to those who can press "Play".
Yes. A large fraction of Snapchat's users are significantly harmed.
First hand, I see it all the time in students. There's an extreme unhealthy obsession with social media that leads to serious inferiority complexes and depression. All of this wrapped in algorithms that compel people to participate in various ways, from streaks to points, etc.
Quantitatively, everything from anxiety to depression to suicide has more than doubled in teens.
Oh heck, forget about teens. I see it in plenty of adult groups, like mothers. There's a major pressure from others to keep up, serious self-doubt for normal setbacks, unrealistic expectations around even mundane things.
Social media is black mirror, and we're doing it to ourselves.
> Social media is black mirror, and we're doing it to ourselves.
You mean black mirror is a pessimistic exaggeration of the state of society and technology. It’s not the other way around. What you’re observing is not profound, it’s literally how the writers approach their process for the show.
In fact, you’re doing this weird thing where you make it seem like black mirror was prophetic and it came before all the observations about tech and society, when it was clearly the other way around.
The criticism from the thread you’re referencing is that their approach is too on the nose and the villains are cartoonish. There’s no subtlety or even anything interesting anymore in the latest seasons. A critique of software subscriptions? We’ve been doing that since they were invented.
Those are fair criticisms.
What’s missing from black mirror, this article, and your perspective is how much social media has benefited everybody. How many jobs has it created? How many brand new careers and small businesses exist only because of social media? It’s an entire economy at this point. The good and bad effects of democratization of information dissemination.
There’s hardly an interesting analysis or critique of the actual current state of tech & society because you’re out here looking for the bad and ignoring the good. Much like black mirror is doing. Its main goal is to be as shocking as possible. That’s why in the thronglets episode, which I did enjoy, there was so much pointless gore. Yes, the point was that the throng had to see what humans are capable of, but there’s no reason to show all the gore associated with drilling through your head or dismembering a dead body. All of that is bottom of the barrel shock value stuff, which is ultimately what black mirror has devolved into.
Children committing suicide at twice the rate is bad. Childhood depression at twice the rate is bad. Declining scores on every metric of well-being and attainment is bad.
I'm ignoring the good?!
No. When kids that I know self harm at alarming rates because of social media, I'm not ignoring the good.
You're prioritizing some abstract nonsense over the actual people who are suffering.
I’m not defending social media. I’m talking about how there’s no nuance in the OP's perspective, black mirror, or the article. It only highlights the negatives, as if that’s all there is. Basically nobody is looking at the positives. If you’re going to do a societal-harm analysis, you should probably consider the benefits too before coming to a conclusion.
It’s a logical fallacy. If we are simply thinking about whether any of society is harmed we might as well just do nothing at all and cease to exist. Nobody in this thread is willing to engage and sincerely discuss the benefits vs the harms.
You're welcome to present the benefits. I can only speak for myself though: they don't amount to a thing worth otherwise poisoning society for.
Other than people that already agree with you, I'm not sure who you are appealing to by suggesting others are caught up in "think of the children".
I bring children up because studies seem to focus on the negatives of social media on children in particular. Also I raised three children and watched social media play out in their lives.
Yeah, it would be far more valuable to have the villains be everyday folks like you and me, just trying to make a buck, too busy or selfish to see the implications of the software they make.
Most technologies in Black Mirror are fully implemented as-is, usually with clear and prescient knowledge of the downsides known and suppressed by the owner of the technology.
Snapchat is not that. It started out as an innocent messaging app and slowly mutated into the monster it is after it was already widely adopted.
The criticism of Black Mirror is that it presents immediate, widespread adoption of the new Torment Nexus 5000, which was always intended to be a force of evil and suffering. Everyone knows exactly what the torment nexus is and willingly accepts it. Snapchat only became a torment nexus after it was established and adopted, and that transformation was malicious.
Did some work with researchers at a local university and found out that Snapchat is like the #1 vector for production and distribution of CSAM. Same thing when it came to online grooming.
> We suggested to them some design changes that we believe would make the platform less addictive and less harmful: [...] 5. Stop deleting posts on Snap’s own servers.
Can someone say the original intent or de-facto use case of Snapchat, and how that's changed over time?
Around the time it started, I heard that it was for adult sexting, with people thinking they could use it to send private selfies that quickly self-destruct. So that (purportedly) the photos can't be retained or spread out of the real-time person-to-person context in which they were shared. (I guess the ghost logo was for "ephemeral".)
And then I vaguely recall hearing that Snapchat changed the feature, or got rid of it.
Sorry to hijack this thread with a completely off-topic issue, but I have no idea where else to ask about this.
I made a submission yesterday showcasing the work of some of my colleagues at UofT; it's satire, but it is backed by serious academic work.
I was very sad to see that it quickly got flagged and removed from the front page when it started to generate discussion.
I just wanted to ask you to unflag it or provide an explanation as to why it should remain flagged and is breaking the guidelines, as I believe censoring/muting academics on important topics such as AI in the current political climate is yet another dangerous step towards fascism.