
How is this different from humans? What magic are you looking for: humility, or an approximation of how well it knows something? Humans bullshit all the time when their pattern matching breaks.


The point is, ChatGPT isn't doing math the way a human would. Humans following the standard arithmetic procedure will get the problem right every time; ChatGPT can get basic problems wrong when it doesn't have something similar in its training set. That shows it doesn't really know the rules of math, it's just "guessing" the result via the statistics encoded in the model.
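To make the contrast concrete, here's a minimal sketch of the grade-school addition procedure in Python (my own illustration, not anything from the thread). It's a fixed digit-by-digit rule, so it produces the right answer even for operands it has never "seen":

    def add_digits(a: str, b: str) -> str:
        # Grade-school long addition: align the digits, then add
        # column by column from the right, carrying as needed.
        n = max(len(a), len(b))
        a, b = a.zfill(n), b.zfill(n)
        carry, digits = 0, []
        for da, db in zip(reversed(a), reversed(b)):
            carry, digit = divmod(int(da) + int(db) + carry, 10)
            digits.append(str(digit))
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    # Correct on arbitrary inputs, including ones unlikely to appear
    # verbatim in any training set.
    assert add_digits("123456789", "987654321") == "1111111110"

The algorithm's correctness doesn't depend on having seen the operands before, which is exactly the property a statistical next-token predictor lacks.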


I'm not sure I care about how it does the work. The interesting bit is that the model doesn't know when it is bullshitting, or the degree to which it is bullshitting.
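A toy illustration of why (the numbers here are made up, not any real model's output): the only internal uncertainty signal is the next-token distribution, and nothing forces that distribution to be calibrated, or to be surfaced in the reply at all.

    import math

    def softmax(logits):
        # Convert raw scores into a probability distribution.
        m = max(logits.values())
        exps = {t: math.exp(v - m) for t, v in logits.items()}
        z = sum(exps.values())
        return {t: e / z for t, e in exps.items()}

    # Hypothetical logits for the answer to "17 * 24 = ?".
    probs = softmax({"408": 3.0, "418": 2.9, "428": 1.0})
    answer = max(probs, key=probs.get)  # "408", with p ~ 0.49

    # The reply states "408" in the same flat tone whether the
    # internal probability is 0.49 or 0.99; the near coin flip
    # between "408" and "418" never reaches the user.

And even if you surface those probabilities, they're statistics over tokens, not a calibrated estimate of whether the arithmetic is actually right.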



