"Responding to the context provided" is very vague. I could argue that I'm doing exactly that right now as I'm writing this comment. It does not imply not being able to e.g. link ideas logically.

As for interrogating GPT when it gets something wrong: people do it because it works. With GPT-4 especially, you can often ask it to analyze its own response for correctness, and it will find the errors without you explicitly pointing them out. You can even ask it to write a new prompt for itself that would minimize the probability of such errors in the future.
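
For what it's worth, here's a minimal sketch of that self-review loop, assuming the OpenAI Python SDK (>=1.0); the question, prompts, and model name are just illustrative placeholders:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(messages):
        # One chat-completion call; returns the assistant's reply text.
        resp = client.chat.completions.create(model="gpt-4", messages=messages)
        return resp.choices[0].message.content

    question = "What is the sum of the first 100 odd numbers?"
    answer = ask([{"role": "user", "content": question}])

    # Second pass: feed the model its own answer back and ask it to
    # audit itself, without pointing out any specific error ourselves.
    critique = ask([
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Analyze your previous answer for "
                                    "correctness. List any errors you find "
                                    "and give a corrected answer."},
    ])
    print(critique)

The key detail is that the second call only asks the model to check its work; nothing in the prompt says where (or whether) a mistake exists.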



