I was not particularly a fan of them: the plot seemed to find overly easy solutions to all the actual messiness of dealing with people very unlike yourself, which, given the rest of the stories, feels like it undercuts their entire point.
The Tchaikovsky novella I really like is Elder Race. Technology-as-magic is done in so many places (Ventus is another favourite), and I usually enjoy it, but I felt that in Elder Race it was pulled off in an unusually elegant way.
As you say, 'social media' is not a good category; we should specify exactly the things that are concerning. Here are the ones where I'm concerned about their effect on young people:
1. a user is shown new content based on extensive profiling and a secret algorithm that the user does not control
2. a user's activity can be discovered and tracked by people who intend to take advantage of the user
3. the operation of the site is optimised for addiction (or more euphemistically "attention")
I absolutely don't think that a book club, the comments on a kid's own website, or person-to-person chat systems should be included in the rules.
Note - I'm not saying these things should be banned, just that I think it's reasonable to restrict their use to adults.
…why do all of those things happen? To sell paid digital advertising. Remove that incentive and I suspect the "social media" problems largely go away.
In reality, a large enough group of people on the internet starts to turn sour. Especially with anonymity. Especially without a specific purpose like a book club. Especially without moderation.
Small groups where you know everyone is where it's at. To avoid internet stalkers and bullies, and for general quality of the community.
Our brains are built for small communities, not billions.
I suspect the reasoning was similar to the reason Tesla bought Solar City or X.ai acquired the site previously known as twitter. Pure unvarnished investor value.
The premise of the steps you've listed is flawed in two ways.
This is more what agentic-assisted dev looks like:
1. Get a feature request / bug
2. Enrich the request / bug description with additional details
3. Send AI agents to handle request
4a. In some situations, manually QA results, possibly return to 2.
4b. Otherwise, agents will babysit the code through merge.
The second is that the above steps are performed in parallel across X worktrees. So, the stats are based on the above steps proceeding a handful of times per hour, in some cases completely unassisted.
---
With enough automation, the engineer is only dealing with steps 2 and 4a. You get notified when you are needed, so your attention can focus on finding the next todo or enriching a current todo as per step 2.
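The loop described above can be sketched roughly as follows. This is a hypothetical illustration, not any real tool's API: names like `Task`, `enrich`, and `run_agent` are made up, and the "needs manual QA" check is a toy heuristic standing in for whatever signal a real setup would use.

```python
# Hypothetical sketch of the parallel agentic workflow described above.
# All names and the QA heuristic are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    enriched: bool = False   # step 2: human adds detail
    merged: bool = False

def enrich(task: Task) -> Task:
    # Step 2: the engineer adds acceptance criteria, context, links.
    task.enriched = True
    return task

def run_agent(task: Task) -> tuple:
    # Step 3: an agent implements the change in its own worktree.
    # Returns the task plus whether it escalates for manual QA (step 4a)
    # or can babysit the code through merge itself (step 4b).
    needs_qa = "ui" in task.description.lower()  # toy heuristic
    if not needs_qa:
        task.merged = True  # agent handles review comments / CI failures
    return task, needs_qa

def process(requests: list) -> tuple:
    """Fan requests out across parallel 'worktrees'; collect the ones
    that escalate back to a human, so attention is only pulled in
    for steps 2 and 4a."""
    tasks = [enrich(Task(r)) for r in requests]
    merged, needs_attention = [], []
    with ThreadPoolExecutor(max_workers=4) as pool:
        for task, needs_qa in pool.map(run_agent, tasks):
            (needs_attention if needs_qa else merged).append(task)
    return merged, needs_attention
```

The key property is the last function: the human is out of the loop except when a task lands in `needs_attention`.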
---
Babysitting the code through merge means it handles review comments and CI failures automatically.
---
I find communication / consensus with stakeholders, and retooling take the most time.
One can think of a lot of obvious improvements to an MVP product that don't require much in the way of "get a feature request/bug - understand the problem - think on a solution".
You know the features you'd like to have in advance, or you can see the changes you want to make as you build it.
And a lot of the "deliver the solution - test - submit to code review, including sufficient explanation" can be handled by AI.
I'd love to see Claude Code remove more lines than it added TBH.
There's a ton of cruft in code that humans are less inclined to remove because it just works, but imagine having an LLM do the clean-up work instead of the generation work.
I think the cycle is because people forget how destructive war is for all sides, how much human wealth is thrown away in order to achieve enormous human misery. If it's happened in recent memory, people are reluctant to let those who think they might benefit from it pursue it. The more time that passes, the easier it is to distract people from the misery and the easier it is to persuade people that it's justified.
While you're not wrong that JS has come a long way in that time, it's not the case that it was an extremely unusual choice at the time - Ryan Dahl chose it for node in 2009.
I think a better approach might be to require that any algorithm used to suggest content to users must be made open source so that people whose world views are being shaped by the content you're feeding them can analyse how you're deciding what to show them.
I feel like there's definitely a problem here with social media and its effect on society, but our first approach should be to increase transparency and accountability, rather than to start banning things by force of law.
The worst show I've seen for this was American: Mythbusters.