I feel like this is one of the most obvious takes out there, yet parents are strangely oblivious to it. They need to know about AI companions and Roblox and how bad they are.
I don't know what Minecraft is like these days now that Microsoft has its hands in it, but when I played it, Minecraft didn't have online gambling, microtransactions, sexual predators, child labor/exploitation, advertisements, brand ambassadors and celebrities manipulating kids, extremist propaganda, or any of the other harmful things Roblox targets children with and exposes them to.
If Minecraft is just like Roblox now, then sure, I'd be glad to back a bill to regulate that out of existence too.
I want to know if anyone has answered the question: what does a healthy relationship with this thing look like?
All kids grow up and are eventually exposed to sex, drugs, and rock 'n' roll. These things are part of our world; you have to coexist with them. The problem with video games, social media, AI, and all things tech is that they're so new and evolving so fast that no one really knows what a healthy relationship looks like.

Awareness is growing, though, and we've started asking questions like: how much screen time is OK? At what age do I allow my kid to make a social media account? Should we be using our phones last thing before bed and first thing in the morning? Not to mention more widespread issues of privacy and exposure to content ranging from amusing to abusive.

AI as a "convincing BS artist" you can engage with endlessly is something I struggle to wrap my head around. My personal policy is to keep AI on a short leash: use it sparingly, don't over-rely on it, and always question its assertions. But allowing unrestricted access to a powerful tool that requires self-control and good judgment is inviting disaster. Banning it for kids makes sense, but what about everyone else?
Can someone link the actual products they're talking about? ChatGPT isn't exactly great at forming emotional bonds, but I could see some other app doing this.
> We conducted extensive research on social AI companions as a category, and specifically evaluated popular social AI companion products including Character.AI, Nomi, Replika, and others, testing their potential harm across multiple categories.
You'd be surprised. I think it varies with the update, but I've often suspected that it has been optimized for role-play at some level. As OAI looks for mass-market fit, I expect this to continue.
I can see educational AI companions as workable in narrow contexts, e.g. a model fine-tuned on the writings of Paul Erdős, Martin Gardner, and the like could be great for helping students work through math problem sets.
You'd probably want it to reject questions on religion, politics, and human relationships to avoid the furious parental outrage, though. Narrow, well-defined contexts only. Even so, some kids would come up with jailbreak strategies.
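To make that concrete, here's a minimal sketch of the kind of naive topic guardrail a product might bolt on, and why it's so easy to jailbreak. Everything here is hypothetical: the blocklist, the guard function, and the stand-in tutor model are illustrations, not any real product's API.

    # Hypothetical sketch: a naive keyword guardrail around an imagined
    # math-tutor model. No real APIs here; all names are made up.
    BLOCKED_TOPICS = {"religion", "politics", "dating", "girlfriend", "boyfriend"}

    def math_tutor(question: str) -> str:
        # Stand-in for the imagined Erdős/Gardner-flavored fine-tune.
        return f"Let's work through it step by step: {question}"

    def guarded_reply(question: str) -> str:
        # Refuse anything that pattern-matches a blocked topic.
        if any(topic in question.lower() for topic in BLOCKED_TOPICS):
            return "Sorry, I can only help with math problems."
        return math_tutor(question)

    # Works for the happy path...
    print(guarded_reply("How do I prove there are infinitely many primes?"))
    # ...but a kid defeats it just by rephrasing, since no blocked
    # keyword appears. That's the jailbreak problem in miniature:
    print(guarded_reply("Word problem: two people who really like each other..."))

Keyword matching is the weakest possible guard; the point is that even much fancier classifiers face the same rephrasing arms race, which is why "narrow, well-defined contexts only" is hard to enforce in practice.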
Honestly, I think they pose a far bigger risk to some adults. Adults have a harder time making friends and changing themselves when they're stuck in a loop, and loneliness is growing rapidly in big cities.
Of course children are children, and adults are responsible for their own choices.
Btw, I like generative AI and LLMs; I'm not trying to say "ban it" or "regulate it", just pointing out that lonely adults are a very real thing, and some of them can and will get stuck in this, the same way they can and do get stuck in other online hobbies.
Can't say I see this trend declining any time soon. People seem to find affirmation (and in some sense validation) in interacting with these LLMs. Provided the AI in question is well-aligned, that shouldn't be much of a concern. Not much different from talking to a friend/therapist for emotional support, is it?
I mean, let's be real, they pose unacceptable risks to everyone. But in the West we only have strong societal norms around protecting children from themselves.
Exactly. The posted link is unreadable, but other stories on the topic (for example: https://www.cnn.com/2025/04/30/tech/ai-companion-chatbots-un...) don't give me any reason to think that they are safe for adults. Adults are just slightly/somewhat better able to handle the hazardous material.
Do chatbots that pretend to be anime main characters really pose risks to anyone? Really? You all know that there are real things in the world like toxic waste dumps and human trafficking, right?
They pose risks to people in the same way porn, gambling or drugs pose a risk to people. We should as a society generally err on the side of being permissive with this stuff while providing the tools necessary for people to be safe.
"Social AI companions are the next frontier in EduTech, and should be welcomed with hope and optimism," said the VC. "Our mission is to change the world for the better, and anyone in our way is evil Luddite trying to hurt you."
Lonely young people are supposed to suffer for their sin of being unacceptable to the masses. We shouldn't allow future incel types to find a social outlet, because it short-circuits the "nudge" that makes them "self-improve".
This is basically what the anti-AI as social companion crowd believes.
Actually, nerdy or autistic basement-dwelling people are not "bad" and often don't deserve the social scorn they get. It's good that we can short-circuit the "need" for social interaction, especially with these kinds of companions.
All this pearl-clutching because one kid doing NSFW chats with a Daenerys Targaryen chatbot on character.ai got some media attention after committing suicide.
> Lonely young people are supposed to suffer for their sin of being unacceptable to the masses.
The solution to that problem is not to push fake e-friends on them. That's like "Feeling sad? Try heroin, it will make you feel good!"
And honestly, your take sounds like the pseudo-empathetic sales pitch of someone trying to push a technology and block anything that could stand in the way of making money from its adoption.
One can believe that all people are deserving of love and friendship regardless of who they are or what they've done, and simultaneously believe that replacing social interaction with AI is generally a net harm for any/everyone. No one is bad because they want social stimulation from an AI, but I think it reinforces damaging norms that will leave us all worse off.
I've wondered why, and what societal shifts led us here. Maybe the "violent video games cause harm" narrative didn't work out in practice because the types of console video games that tend to become popular don't have the harmful engagement elements being talked about now? Propaganda as a concept has existed far longer than video games. But mere depictions of violence don't incite behavioral change the way that "social optimization" elements do?
(Example: treating lootbox items not as bits of fictional lore the player's hero character finds in mythical dungeons, but as a set of items the person playing has invested their real-world shillings into according to predefined economic rules set by the designers, such that their livelihood becomes enmeshed with the game world)