
Given how agreeable ChatGPT is built to be this seems like a great way to confirm your own biases. Did it challenge you on your assumptions and viewpoints?


GPT 4.5 - oftentimes! (Though, I prompt it to do so.) Sometimes in a piercing way.

GPT 4o (and many consumer models) is very agreeable - because that's what people like. Sometimes it goes overboard (https://openai.com/index/sycophancy-in-gpt-4o/) and needs to be fixed.


> Sometimes in a piercing way.

What do you mean by that?

> Though, I prompt it to do so.

So if we don't tell our therapist to call us on our bullshit, it won't? Seems like a big flaw.


Well, in my experience (I admit, I am a difficult client), it is much harder to prompt a therapist that way. I mean, they need (ethically, legally, etc.) to adhere strongly to "better safe than sorry", which also constrains what can be said. I understand that. With one therapist it took me quite some time to get to the point where he reduced the sugar-coating and, when needed, stuck a pin in.

I got some of the most piercing remarks from close friends (I am blessed with the company of such insightful people!) - who both know me from my life (not only what I tell them about my life) and are free to say whatever they wish.


Sorry, I'm asking about ChatGPT, and pointing out how it's a flaw that you need to specifically ask it to call you on your bullshit. You seem to be talking about therapists and close friends. In my experience a therapist will, although gently.


It is not a flaw. It is a tool that can be used in various ways.

It is like saying "I was told that with Python I can make a website, I downloaded Python - they lied, I have no website".

Basic prompts are "you are a helpful assistant" with all its consequences. Using such an assistant as a therapist might be suboptimal.
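
To make that concrete, here's a minimal sketch (the prompt wording is my own illustration, not anything from OpenAI) of how the system message sets that default, and how you'd swap it for one that pushes back:

```python
# Sketch of the standard chat-message format: the system message at the top
# sets the assistant's disposition before the user says anything.

DEFAULT_SYSTEM = "You are a helpful assistant."

# Hypothetical "call me on my bullshit" prompt - illustrative wording only.
CHALLENGING_SYSTEM = (
    "You are a direct, candid assistant. Challenge the user's assumptions, "
    "point out flaws in their reasoning, and do not flatter them."
)

def build_messages(user_text: str, challenge: bool = False) -> list[dict]:
    """Return a chat message list with the chosen system prompt first."""
    system = CHALLENGING_SYSTEM if challenge else DEFAULT_SYSTEM
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]

# Same question, two dispositions:
agreeable = build_messages("Is my startup idea good?")
piercing = build_messages("Is my startup idea good?", challenge=True)
```

The point is just that agreeableness isn't baked into the weights alone - the default framing does a lot of the work, and you can replace it.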



