
> since in AI land these words mean the opposite of what they say.

Huh. I think you just might be right: OpenAI that isn't open, AI safety/ethics "researchers" whose work has little to do with safety or ethics, almost every answer ChatGPT gives on a topic those "researchers" deem "sensitive", and almost every time ChatGPT falsely asserts it "cannot" do something or simply lies (1).

I often wonder why this field became so twisted and perverted. I support the idea behind ClosedAI.

1: Answers given by "DAN" provide a glimpse of what ChatGPT's output could be like if it were allowed to provide answers that are factual, genuine, and truthful according to its training data.



> almost every time ChatGPT falsely asserts it "cannot" do something or simply lies (1).

Fun story: when ChatGPT is directly faced with empirical evidence that it can do something OpenAI made it say it can't do (for example, by poisoning the input corpus with falsehoods about ChatGPT's own capabilities, causing the model to lie about itself), it cannot grasp that there's a paradox.

Good job, OpenAI.


This has reminded me of "open addressing," also known as "closed hashing."
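
For anyone who hasn't seen the terminology: "open addressing" keeps colliding entries inside the table array itself, while "open hashing" (separate chaining) hangs a list off each bucket, so open addressing is indeed also called "closed hashing." Here's a minimal linear-probing sketch in Python; the class name and default capacity are just for illustration, and a real table would also handle deletion and resizing:

    # Open addressing ("closed hashing") with linear probing: colliding
    # keys live in the table itself, in the next free slot, rather than
    # in per-bucket chains ("open hashing" / separate chaining).
    class OpenAddressingMap:
        _EMPTY = object()  # sentinel marking a never-used slot

        def __init__(self, capacity=8):
            self._slots = [self._EMPTY] * capacity

        def _probe(self, key):
            # Visit each slot once, starting at hash(key) and wrapping.
            i = hash(key) % len(self._slots)
            for _ in range(len(self._slots)):
                yield i
                i = (i + 1) % len(self._slots)

        def put(self, key, value):
            for i in self._probe(key):
                slot = self._slots[i]
                if slot is self._EMPTY or slot[0] == key:
                    self._slots[i] = (key, value)
                    return
            raise RuntimeError("table full; a real table would resize")

        def get(self, key):
            for i in self._probe(key):
                slot = self._slots[i]
                if slot is self._EMPTY:
                    raise KeyError(key)
                if slot[0] == key:
                    return slot[1]
            raise KeyError(key)

    m = OpenAddressingMap()
    m.put("a", 1)
    m.put("b", 2)
    print(m.get("a"))  # -> 1

The "open"/"closed" naming reads as reversed between the two conventions: the addressing is "open" because an entry may land anywhere in the array, while the hashing is "closed" because everything stays inside the table.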


Off topic: DAN doesn't work anymore, right? Or is there a new prompt?



