
It absolutely FAILS for the simple problem of 1+1 in what I like to call 'bubble math'.

1+1=1

Or actually, 1+1=1 and 1+1=2, with some probability for each outcome.

Because bubbles can be put together and either merge into one, or stay as two bubbles with a shared wall.

Obviously this can be extended and formalized (sketched below), but hopefully it also shows that mathematics isn't even guaranteed to give the same answer for 1+1, since the result depends on the context and rules you set up (modular arithmetic, etc.).
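
For what it's worth, here is a minimal sketch of one way to formalize it in Python. The merge_prob parameter is my own assumption; the bubble analogy doesn't pin down how likely the two bubbles are to coalesce.

    import random
    from collections import Counter

    def bubble_add(merge_prob=0.5):
        # Push two bubbles together: with probability merge_prob they
        # coalesce into a single bubble (1 + 1 = 1); otherwise they
        # remain two bubbles sharing a wall (1 + 1 = 2).
        return 1 if random.random() < merge_prob else 2

    # Sample the distribution of outcomes for 1 + 1.
    print(Counter(bubble_add() for _ in range(10_000)))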

I should also mention that GPT-4 performs astoundingly well at this type of problem, where new rules are made up on the fly. So in-context learning is powerful, and the idea that it 'just regurgitates training data' for simple problems is quite false.


