99% of the commenters here don't have an iota of a clue what they're talking about.
There's easily a 10:1 ratio of "it doesn't understand, it's just fancy autocomplete" to the alternative, in spite of peer-reviewed research published by Harvard and MIT researchers months ago demonstrating that even a simplistic GPT model builds world representations from which it draws its responses, rather than doing simple frequency guessing.
Watch the livestream!?! Why would they do that, when they already know it's not very impressive and not worth their time beyond commenting on it online?
I imagine this is coming from some sort of monkey-brain existential-threat rationalization ("I'm a smart monkey and no non-monkey can do what I do"). Or possibly it's just an overreaction to the very early claims of "it's alive!!!" from an age when these models were still just glorified Markov chains. But whatever the reason, it's getting old fast.
>peer-reviewed research published by Harvard and MIT researchers months ago
Curious, source?
EDIT: Oh, the Othello paper. Be careful extrapolating that too far. Notice they didn't ask it to play the same game on a board of arbitrary size (something that would be easy for a model with genuine world understanding).
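
For anyone else who goes looking: the paper's core evidence comes from probing. They trained an 8-layer GPT on Othello move transcripts, then trained probe classifiers on its internal activations and found the full board state could be recovered from them. Below is a rough sketch of that probing idea in PyTorch. All the dimensions, names, and stand-in data are invented for illustration, and the paper's actual probes are nonlinear rather than this single linear layer.

```python
# Rough sketch of the probing setup from the Othello paper
# (Li et al., "Emergent World Representations"). Everything here is
# illustrative: dimensions, names, and data are made up, and the
# paper's real probes are nonlinear, trained on activations from an
# 8-layer GPT fit to Othello game transcripts.
import torch
import torch.nn as nn

HIDDEN_DIM = 512   # assumed width of the model's residual stream
NUM_SQUARES = 64   # an 8x8 Othello board
NUM_STATES = 3     # each square is empty, black, or white

# The probe maps one hidden activation vector to a predicted state
# for every square. If probes like this reach high accuracy, the
# board state is decodable from the activations, i.e. the model is
# tracking the game state internally, not just token frequencies.
probe = nn.Linear(HIDDEN_DIM, NUM_SQUARES * NUM_STATES)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

def probe_loss(hidden: torch.Tensor, board: torch.Tensor) -> torch.Tensor:
    """hidden: (batch, HIDDEN_DIM) activations captured after a move.
    board: (batch, NUM_SQUARES) ground-truth square states in {0, 1, 2}."""
    logits = probe(hidden).view(-1, NUM_SQUARES, NUM_STATES)
    return nn.functional.cross_entropy(
        logits.reshape(-1, NUM_STATES), board.reshape(-1)
    )

# One training step on stand-in data. In the real setup, (hidden, board)
# pairs come from running game transcripts through the frozen GPT and
# recording the true board position at each move.
hidden = torch.randn(32, HIDDEN_DIM)
board = torch.randint(0, NUM_STATES, (32, NUM_SQUARES))
opt.zero_grad()
loss = probe_loss(hidden, board)
loss.backward()
opt.step()
```

The point of the setup is that the probe only sees activations, never the moves themselves, so high probe accuracy implies the network is computing the board state internally rather than just matching move frequencies. It says nothing about generalizing to arbitrary board sizes, which is exactly the gap noted above.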