
Given how often I get top results that are clearly machine-generated content, I anticipate a lot of models overfitting to broken, machine-generated text in the near term.

If you train NLP on text generated by NLP, you're gonna have a bad time.



Now I’m curious: what would happen if one repeatedly trained GPT-3 on text generated by a “previous generation” GPT-3? (Similar to successive JPEG saves.)
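
A minimal sketch of what that loop might look like, using GPT-2 via Hugging Face transformers as a stand-in (GPT-3's weights aren't publicly trainable); the prompt, sample count, and hyperparameters are illustrative assumptions, not a tested recipe:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    def generate_corpus(model, n_samples=100, max_length=128):
        """Sample text from the current generation of the model."""
        model.eval()
        texts = []
        with torch.no_grad():
            for _ in range(n_samples):
                out = model.generate(
                    tokenizer.encode("The", return_tensors="pt"),  # hypothetical seed prompt
                    max_length=max_length,
                    do_sample=True,
                    top_p=0.95,
                    pad_token_id=tokenizer.eos_token_id,
                )
                texts.append(tokenizer.decode(out[0], skip_special_tokens=True))
        return texts

    def finetune(model, texts, epochs=1):
        """Fine-tune the model on text the previous generation produced."""
        model.train()
        for _ in range(epochs):
            for text in texts:
                ids = tokenizer.encode(text, return_tensors="pt")
                loss = model(ids, labels=ids).loss
                loss.backward()
                optimizer.step()
                optimizer.zero_grad()

    # Each pass is one "save": sample from generation N, train generation N+1 on it.
    for generation in range(5):
        corpus = generate_corpus(model)
        finetune(model, corpus)
        print(f"generation {generation}: sample -> {corpus[0][:80]!r}")

Presumably each pass amplifies the model's own quirks, the text-generation analogue of recompressing a JPEG.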


I'm pretty sure it would drift further and further from something a human would recognize as intelligible.


It would be a kind of DeepDream for NLP



