
The argument I was responding to (made by the user crazygringo) was that a GPT-3 model trained on the Windows source code is fine to use nigh unto indiscriminately, because Copilot is supposedly abstracting knowledge like a human engineer. I argued that it doesn't do that: GPT-3 is a pattern recognizer that not only is theoretically prone to memorizing and regurgitating things, but has been shown to do so in practice. You then responded to my argument claiming that GPT-3 in fact... what? Are you actually defending crazygringo's argument or not?

Note carefully that crazygringo explicitly stated that copying little bits and pieces of a project is supposedly fair use, continuing the (as far as I understand, incorrect) assertion by lacker (the person who started this thread) that copying someone's binary tree implementation would be fair use; the two of them seem to believe that you have to copy essentially an entire combined work (whatever that means to them) for something to be infringing.

Honestly, it now just seems like you decided to jump into the middle of a complex argument to make a pedantic point: either you agree that GPT-3 is, as crazygringo insists, like a human that is allowed to read and learn from anything and then use that knowledge in any way it sees fit, or you agree with me that GPT-3 is a fancy pattern recognizer that can and will generate copyright infringements if used to solve certain problems. Given your new statements about Copilot being a "fancy pen" that can in fact be used incorrectly (something crazygringo seems to claim isn't possible), you frankly sound like you agree with my arguments!


I think a crucial distinction to be made here, and with most 'AI' technologies (and I suspect this isn't news to many people here), is that, yes, they are building abstractions. They are not simply regurgitating. But, no, those abstractions are not identical (and very often not remotely similar) to human abstractions.

That's the very reason why AI technologies can be useful in augmenting human intelligence; they see problems in a different light, can find alternate solutions, and generally just don't think like we do. There are many paths to a correct result and they needn't be isomorphic. Think of how a mathematical theorem may be proved in multiple ways, but the core logical implication of the proof within the larger context is still the same.


Statistical modelling doesn't imply that GPT-3 is merely regurgitating. There are regularities among different examples, i.e., abstractions, that can be learned to improve its ability to predict novel inputs. There is certainly a question of how much Copilot is just reproducing input it has seen, but simply noting that it's a statistical model doesn't prove the case that all it can do is regurgitate.
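
To make that concrete, here is a deliberately tiny sketch: a character-bigram model in Python. It is nothing like GPT-3 in architecture or scale, and the toy corpus is made up purely for illustration; the only point is that a statistical model is not the same thing as a lookup table of its training data.

    # Toy character-bigram "language model": it learns transition
    # statistics (regularities) from training strings, then assigns
    # probability to a string it has never seen verbatim.
    from collections import defaultdict

    def train(corpus):
        counts = defaultdict(lambda: defaultdict(int))
        for text in corpus:
            for a, b in zip(text, text[1:]):
                counts[a][b] += 1
        # Normalize counts into conditional probabilities P(next | current).
        return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
                for a, nxt in counts.items()}

    def prob(model, text):
        p = 1.0
        for a, b in zip(text, text[1:]):
            p *= model.get(a, {}).get(b, 0.0)
        return p

    corpus = ["the cat sat", "the dog sat", "a cat ran"]
    model = train(corpus)

    # "the cat ran" never appears in the corpus, yet the model gives it
    # non-zero probability: it has picked up the local regularities.
    print(prob(model, "the cat ran") > 0)   # True
    print(prob(model, "xyzzy") > 0)         # False: no learned regularity applies

Of course, a model that generalizes can still also memorize, which is exactly the open question about Copilot above; this sketch only shows that "statistical model" does not by itself mean "regurgitation machine".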



