
Pick a prompt with a wide codomain but a single correct answer. If the model can get that answer right, that's reasoning.


Your original claim was that an LLM can reason, and you say this can be proven by picking one of these prompts with a large codomain that has a precise answer requiring reasoning. If an LLM can arrive at that specific answer out of a huge codomain, you claim that proves reasoning. Do I have that right?

So my question is, and has been for the last three replies: can you give an example of one of these prompts?




