Hacker News

Reading between the lines, I think a key part of what makes chatbots attractive, re lack of judgment, is that they're like talking to a new stranger every session.

Both IRL and online, a stranger is sometimes the perfect person to talk to about certain things precisely because they have no history with you. In the ideal case they have no broader context about who you are or what you've done, which is very freeing (though it can also be exploited in bad faith).

Online discussion, and now LLMs, add an extra freeing element, assuming anonymity: they have no prejudices about your appearance, age, or abilities either.

Sometimes it's hard to talk about certain things when one feels that judgment is likely from another party. In that sense chatbots are being used as perfect strangers.



Agreed, that's a good take.

Again, I think they have utility as a “perfect stranger” as you put it (if it stays anonymous), or “validation machine” (depending on the sycophancy level), or “rubber duck”.

I just think it's irresponsible to pretend these are doing the same thing skilled therapists are doing, just as I think it's irresponsible to treat all therapists as equivalent. If you pretend they're equivalent, you're effectively flooding the market with a billion free therapists who are bad at their job, which will inevitably shrink the supply of good therapists: people who would have entered the field never do, because it looks oversaturated.


Also important is simply that the AI is not human.

We all know that however "non-judgmental" another human claims to be, they're having all kinds of private reactions and thoughts they aren't sharing. And we can't turn off the circuits that crave approval and status from other humans (even strangers), so it's basically impossible not to mask and filter to some extent.



