
> almost every time ChatGPT falsely asserts it "cannot" do something or simply lies (1).

Fun story: if ChatGPT is directly confronted with empirical evidence that it can do something OpenAI made it claim it can't do (for example, when the input corpus has been poisoned with falsehoods about ChatGPT's own capabilities, causing the model to lie about itself), it cannot grasp that there's a paradox.

Good job, OpenAI.
