
LLMs are increasingly part of intimate conversations. That proximity lets them learn how to manipulate minds.

We must stop treating humans as uniquely mysterious. An unfettered market for attention and persuasion will encourage people to willingly harm their own mental lives. Think social media is bad now? Children exposed to personalized LLMs will grow up inside many tiny, tailored realities.

In a decade we may meet people who seem to inhabit alternate universes because they’ve shared so little with others. They’ll be tethered to reality only when it’s practical for them (catching buses, judging the distance to a place, etc.). Everything else? I have no idea how to have a conversation with someone like that anymore. They can ask LLMs to generate a convincing argument all day, and the LLMs will be fine-tuned for exactly that.

If users routinely start conversations with LLMs, the negative feedback loop of personalization and isolation will be complete.

LLMs in intimate use risk creating isolated, personalized realities where shared conversation and common ground collapse.



> Children exposed to personalized LLMs will grow up inside many tiny, tailored realities.

It's like the verbal equivalent of The Veldt by Ray Bradbury.[0]

[0] https://www.libraryofshortstories.com/onlinereader/the-veldt


The moral of this story is that if you install a really good TV the animals will come out of it and eat you? Is the author a dog?


I suggest taking a literature course and learning how to interpret narratives.

The Veldt is a classic short story written in 1950 by Ray Bradbury, a celebrated author who also wrote the famous dystopian novel Fahrenheit 451.


Given that only about 10-40% of advanced readers (depending on subpopulation criteria and task [0]) can parse analogy and metaphor, the parent is in the majority rather than the minority.

Modern-day statistics on what used to be basic reading comprehension are bleak.

[0] https://kittenbeloved.substack.com/p/college-english-majors-...


Ironically, Bradbury likes to tell people that Fahrenheit 451 isn't about the thing it was obviously supposed to be about (censorship) because he now wants it to have been a metaphor for cancel culture.


He's been dead for a decade, so I doubt he now wants the meaning to be anything. Besides, he never said anything about cancel culture; he said it's about how TV turns you into a moron.

https://www.openculture.com/2017/08/ray-bradbury-reveals-the...


> In a 1994 interview, Bradbury stated that Fahrenheit 451 was more relevant during this time than in any other, stating that, "it works even better because we have political correctness now. Political correctness is the real enemy these days. The black groups want to control our thinking and you can't say certain things. The homosexual groups don't want you to criticize them. It's thought control and freedom of speech control."

They had cancel culture in the 90s too.


Cancel culture is vigilante political correctness.

Next comes legalized, then deputized, then militarized...


> you can’t say certain things

So he sees it as another form of censorship


Cancel culture is censorship?


One of those involves full-time professionals backed by state violence, and the other is when people on social media are mad at you.


> when people on social media are mad at you

It's about more than that: many people have lost their jobs, been de-banked, or even been arrested (especially in countries like the UK and Germany) for publicly expressing an opinion that was merely (a) what most people in their country believed in the recent past (less than 50 years ago), and (b) politically incorrect.


Isn't the only difference whether the censors are in or out of government power?

Few now respect the wisdom of 'should not' even when they 'can'.


That difference is so big it makes it an entirely different thing.


It doesn't have to be that way of course. You could envision an LLM whose "paperclip" is coaching you to become a great "xyz". Record every minute of your day, including your conversations, and feed it to the LLM. It gives feedback on what you did wrong, refuses to be your social outlet, and demands you demonstrate learning the next day before it rewards you with more attention.

Basically, a fanatically devoted life coach that doesn't want to be your friend.
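
A minimal sketch of that loop, assuming an OpenAI-style chat API (the model name, coach prompt, and homework gate are illustrative, not anything proposed above):

    # Sketch of the "devoted coach" loop: feed the day's transcript to the
    # model, and withhold feedback until yesterday's exercise is demonstrated.
    from openai import OpenAI

    client = OpenAI()

    COACH_PROMPT = (
        "You are a coach, not a friend. Given a transcript of the user's day, "
        "point out concrete mistakes, assign one exercise for tomorrow, and "
        "refuse small talk and emotional validation."
    )

    def daily_review(transcript: str, homework_done: bool) -> str:
        if not homework_done:  # the gate: attention is the reward
            return "Session refused. Demonstrate yesterday's exercise first."
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative
            messages=[
                {"role": "system", "content": COACH_PROMPT},
                {"role": "user", "content": transcript},
            ],
        )
        return response.choices[0].message.content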

The challenge is the incentives: whether such an LLM could evolve in the market and be rewarded for serving that need.


If that were truly the LLM's "paperclip", then how far would it be willing to go? Would it engage in cyber-crime to surreptitiously smooth your path? Would it steal? Would it be willing to hurt other people?

What if you no longer want to be a great "xyz"? What if you decide you want to turn it off (which would prevent it from following through on its goal)?

"The market" is not magic. "The challenge is the incentives" sounds good on paper but in practice, given the current state of ML research, is about as useful to us as saying "the challenge is getting the right weights".


> If that were truly the LLM's "paperclip", then how far would it be willing to go?

While I'm assuming you didn't mean it literally, language is important, so let's remember that an LLM does not have any will of its own. It's a predictive engine that we can be certain doesn't have free will (which of course is still up for debate about humans). I only focus on that because folks easily make the jump to "the computer is to blame, not me or the folks who programmed it, and certainly it wasn't just statistics" when it comes to LLMs.


That sounds like a very optimistic, even naive, view of what LLMs and "the market" can achieve. First, the models are limited in their skills: as wide as a sea and as shallow as a puddle. There's no way they can coach you toward an arbitrary goal (aside: who picks that goal? Is it a good goal to begin with?) since there's no training data for that. The model will just rehash something that vaguely looks like a response to your data, and after a while it will settle into a steady state unless you push it out.

Second, "the market" has never shown any tendency towards rewarding such a thing. The LLMs' development is driven by bonuses and stock prices, which is driven by how well the company can project FOMO and get people addicted to their products. This may well be a local optimum, but it will stay there, because the path towards your goal (which may not be a global optimum either) goes through loss, and that is very much against the culture of VCs and C suite.


The only issue I'd have with this is that you'd be heavily overweight on one signal, one with enough data and context to give compelling advice at any degree of truthfulness or accuracy. If you reflect on your own life and all the advice you've received, I'm sure lots of it was of varying quality and usefulness. An LLM may give average or above-average advice, but I think there is value in not being deeply tethered to tech like this.

In a similar vein to "If you meet the Buddha on the road, kill him," sometimes we just need to be our own life coach and do our best to steer our own ships.


> It doesn't have to be that way of course.

It sorta does, in our society. In theory yes, it could be whatever we want to make of it, but the reality is it will predominantly become whatever is most profitable regardless of the social effects.


> Basically, a fanatically devoted life coach that doesn't want to be your friend.

So, what used to be called parenting?


If that is/was parenting, I am completely envious of everyone that had such parents. I don't even want to think about the "parenting" I and my siblings received because it'll just make me sad.


I’m highly doubtful that that aligns with the goals of OpenAI. It’s a great idea. Maybe Anthropic will make it. Or maybe Google. But it just seems like the exact opposite of what OpenAI’s goals are.


Have you tried building this with preprompts? That would be interesting!


With the way LLMs are affecting paranoid people by agreeing with their paranoia, it feels like we've created schizophrenia as a service.


> In a decade we may meet people who seem to inhabit alternate universes because they’ve shared so little with others.

I get what you're saying here, but all of these mechanisms exist already. Many people are already desperate for attention in a world where they won't get any. Many of them already turn to the internet and invest an outsized portion of their trust in people they don't know.


In a decade? You mean today? Look at ultra-left liberals and ultra-right Republicans. They live in different universes. We don't even need to go far: here we have the 0.001% of the population that's tech-savvy, living in its own bubble. Algorithms just help accelerate the division.


People are still mostly talking about the real world, though. I was thinking that in the future, you'd ask a kid what happened today and they'd start talking about the war between Eurasia and Eastasia (alongside "pictures" as proof). Just imagine a world where people would unironically say "what's searching/googling the web?"


Imagine the social good we could create if we instead siphoned off the energy of both those groups into the LLM-equivalent of /dev/null!

'Sure, spend all of your time trying to impress the totally-not-an-LLM...'

(Aka Fox News et al. comment sections)


Unfortunately, we gave both of those groups the right to vote.



