
I’ve just asked GPT-3 to sum two large random numbers and it gave me the correct sum. Then I defined a Fibonacci-like sequence (f(1) = 1, f(2) = 1, f(n) = f(n-1) + f(n-2) + 7) and it correctly gave me the value of the 10th element. It’s not just a statistical model generating something resembling its training set; it understands the training set, to a similar extent as we understand the world around us…
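
(For anyone who wants to check the recurrence described above, here is a quick Python sketch; the function name f is my own, not from the thread. By this definition the 10th element works out to 433.)

    # Recurrence from the parent comment: f(1) = 1, f(2) = 1,
    # f(n) = f(n-1) + f(n-2) + 7
    def f(n: int) -> int:
        a, b = 1, 1                  # f(1), f(2)
        for _ in range(n - 2):       # step the pair forward until b holds f(n)
            a, b = b, a + b + 7
        return a if n == 1 else b

    print(f(10))  # prints 433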


I don't see how your example demonstrates your hypothesis, though. Summing two numbers and producing the next term of a Fibonacci-like sequence are exactly what you would expect from deep and complex statistical modelling of the existing internet data.


Both of these examples show GPT not merely approximating outputs based on the training set (the exact answers for these inputs don't appear anywhere in it) but understanding the algorithms and being able to apply them. I don't believe our brains are doing anything different from that.



