
That's a good example which illustrates that GPT (regardless of the number) doesn't even try to solve problems and provide answers, because it's not optimized to solve problems and provide answers; it is optimized to generate plausible text of the kind that might plausibly appear on the internet. In that "genre of literature", pretty much every puzzle does have a solution, perhaps a surprising one; even the logically impossible ones tend to get answers built on some out-of-the-box thinking or a paradox. So it generates the closest thing it can: a deus ex machina solution that magically arrives at the right answer, since even that is probably more likely as an internet forum answer than a proof that the puzzle can't be solved.

It mimics people writing things on the internet, so being wrong, making logic errors, confidently writing bullshit, or intentionally lying are all plausible and more common than simply admitting you have no idea. When people have no idea, they simply don't write a blog post about it (so those situations don't appear in GPT's training data), but when people think they know, they write it up in detail in a confident, persuasive tone even if they're completely wrong; and that does get taught to GPT as an example of good, desirable output.
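For concreteness, here's a minimal sketch of the objective being described, assuming a toy PyTorch model (the real thing is a deep transformer, but the loss is the same shape). Note that the loss only rewards assigning high probability to whatever token actually came next in the scraped text; nothing in it checks whether that text was correct:

    import torch
    import torch.nn as nn

    # Toy "language model": maps the current token id to scores for the next one.
    # Hypothetical sizes, purely illustrative.
    vocab_size, hidden = 100, 32
    model = nn.Sequential(nn.Embedding(vocab_size, hidden),
                          nn.Linear(hidden, vocab_size))

    # A training sequence of token ids (stand-in for a scraped forum post).
    tokens = torch.tensor([5, 17, 42, 8, 99])
    inputs, targets = tokens[:-1], tokens[1:]

    logits = model(inputs)                                # (seq_len, vocab_size) scores
    loss = nn.functional.cross_entropy(logits, targets)   # plausibility of the actual next tokens
    loss.backward()                                        # gradients push probability toward the observed text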



> because it's not optimized to solve problems and provide answers

The entire point of RLHF training is to do exactly this. Every model since the original GPT-3 (InstructGPT onward) has been fine-tuned with human feedback specifically to answer questions and solve problems, not just to continue internet text.
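For reference, a sketch of the preference loss at the heart of RLHF, assuming a scalar reward model and PyTorch; the names and numbers are illustrative, not OpenAI's actual code. The reward model learns to score the answer a human preferred above the rejected one, and the policy is then tuned to maximize that score:

    import torch
    import torch.nn.functional as F

    def preference_loss(reward_chosen, reward_rejected):
        # Bradley-Terry style objective used to train the reward model:
        # push the score of the human-preferred answer above the rejected one.
        return -F.logsigmoid(reward_chosen - reward_rejected).mean()

    # Hypothetical scalar rewards for two (chosen, rejected) answer pairs.
    r_chosen = torch.tensor([1.3, 0.2])
    r_rejected = torch.tensor([0.4, 0.9])
    print(preference_loss(r_chosen, r_rejected))  # lower when chosen outranks rejected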

But of course the model can only generate text in one direction and can't take time to "think" or undo anything it's generated.
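A sketch of why that is, assuming greedy decoding and a hypothetical model that returns per-position next-token scores: each token is chosen from a distribution conditioned only on what has already been emitted, appended, and never revisited.

    import torch

    def greedy_decode(model, prompt_ids, max_new_tokens=50, eos_id=0):
        ids = list(prompt_ids)
        for _ in range(max_new_tokens):
            logits = model(torch.tensor(ids))    # (len(ids), vocab_size) scores
            next_id = int(logits[-1].argmax())   # commit to the single most likely next token
            ids.append(next_id)                  # appended once; never undone or revised
            if next_id == eos_id:
                break
        return ids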



