
I've found that ChatGPT's behavior can vary widely from session to session. The recent reports that GPT-4 is a "mixture of experts" might also be relevant.

Do we know that it wouldn't have varied in its answer by just as much if you had tried in a new session at the same time?




There is randomness even at temperature t=0; there was another HN submission about that.
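
For context, here's a minimal sketch of what temperature means in decoding (illustrative Python, not OpenAI's actual implementation): t=0 is conventionally treated as greedy argmax, so in theory it should be deterministic.

    import numpy as np

    def sample_token(logits, temperature=1.0, rng=np.random.default_rng()):
        # Illustrative only -- not OpenAI's actual decoding code.
        if temperature == 0.0:
            # t=0 is treated as greedy decoding: always pick the
            # highest-logit token, which in theory is deterministic.
            return int(np.argmax(logits))
        # Scale logits by temperature, then softmax and sample.
        scaled = (logits - logits.max()) / temperature
        probs = np.exp(scaled)
        probs /= probs.sum()
        return int(rng.choice(len(logits), p=probs))

    logits = np.array([2.0, 1.9, 0.5])
    print(sample_token(logits, temperature=0.0))  # always index 0 here

In practice, the nondeterminism people report at t=0 is usually attributed to things outside this sketch, e.g. non-associative floating-point accumulation in batched GPU kernels, and (speculatively, if GPT-4 is a mixture of experts) batch-dependent expert routing.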


I tested it several times; new chats never got this right at first. I tried at least six times. I was experimenting and found that GPT-4 couldn't be fooled by faulty proofs; only a valid proof could change its mind.

Now it seems to know this mathematical property from the first prompt, though.



