
I wonder whether ChatGPT's opinions/positions are stable. (Well, actually I'm pretty sure the answer is that they are not.)

For example, if you ask it about Electron in another session, does it maintain that it thinks Electron is garbage? Or does it just regurgitate whatever enthusiasts post on the internet? Presumably both frameworks have some set of enthusiasts.

*Replace Tauri/Electron with any two competing products if you want, e.g. Android/iOS.
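One way to probe stability empirically is to ask the same question in many fresh sessions and measure how often the answers agree. A minimal sketch of the measurement side; `stance` is a deliberately crude keyword classifier, and the commented-out `ask_model` call is a hypothetical stand-in for whatever chat API you use:

```python
from collections import Counter

def stance(answer: str) -> str:
    """Crude stance classifier: which product does the answer favor?"""
    text = answer.lower()
    if "tauri" in text and "electron" not in text:
        return "tauri"
    if "electron" in text and "tauri" not in text:
        return "electron"
    return "mixed"

def stability(answers: list[str]) -> float:
    """Fraction of fresh-session answers agreeing with the majority stance."""
    counts = Counter(stance(a) for a in answers)
    top_count = counts.most_common(1)[0][1]
    return top_count / len(answers)

# In practice each answer would come from a separate, fresh session, e.g.:
#   answers = [ask_model("Tauri or Electron: which is better?") for _ in range(20)]
# Canned answers here just to show the measurement:
answers = [
    "Tauri is lighter and faster.",
    "Electron has a bigger ecosystem.",
    "Tauri, because of binary size.",
]
print(stability(answers))  # 2 of 3 favor Tauri -> ~0.67
```

A score near 1.0 would suggest a stable position; anything near chance would support the "regurgitating whichever enthusiasts it sampled" theory.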




I feel like internal consistency will be key for many creative generative efforts, like the post on world building with ChatGPT. Ideally, we want previously generated world content available as context for future generation. For example, if it creates a fictional village with some level of detail and you later create a fictional person from that village, can it consistently pull details of the village from the original creation into the new character's description? I haven't experimented enough in this space to understand its strengths and weaknesses here.
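The "carry prior generations forward as context" idea can be sketched as a simple accumulating canon. This is only an illustration of the pattern, assuming a hypothetical `generate` function standing in for a real chat-completion call:

```python
world_bible: list[str] = []  # accumulated canon from earlier generations

def generate(prompt: str, context: list[str]) -> str:
    """Hypothetical stand-in for a model call; a real version would
    prepend the canon to the prompt before sending it to the API."""
    return f"[generated with {len(context)} canon entries: {prompt}]"

def create(prompt: str) -> str:
    """Generate with all prior canon in context, then add the result
    to the canon so later generations can reference it."""
    result = generate(prompt, world_bible)
    world_bible.append(result)
    return result

village = create("Describe a fictional village.")
character = create("Describe a person from that village.")
print(character)  # the second call sees 1 canon entry (the village)
```

The open question in the comment is whether the model actually uses that carried-forward detail consistently once the canon grows large, not whether you can mechanically supply it.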


Try asking it to explain how crypto is a scam. It will consistently contradict you and explain in detail why you're wrong and crypto is not a scam.


I assume it will, judging by a comment above where it praises C++ one second, then switches to praising Python when asked what the best programming language is. So to your point, it's echoing the fanboying.



