
Humans are dependent on their input data (through lifetime learning and, perhaps, information encoded in the brain from evolution), and yet they can produce out of distribution information. How?

There are uncountably many models that perfectly replicate the data they're trained on, and some generalize out of distribution much better than others. Something like dreaming might be a form of regularization: experimenting with simpler structures that perform equally well on the training data but generalize better (e.g. by discovering simple algorithms that reproduce the data as well as pure memorization does, but require simpler neural circuits than the memorizing ones).
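The memorization-vs-simpler-circuit distinction can be sketched in a few lines. This is a toy illustration (not from the comment above): two "models" that both fit the training data perfectly, where only the simpler one has anything to say out of distribution. The data and function names are made up for the example.

```python
# Hypothetical training data, secretly generated by y = 2x.
train = {0: 0, 1: 2, 2: 4, 3: 6}

# Model A: pure memorization -- a lookup table over the training set.
def memorizer(x):
    return train.get(x)  # returns None off the training set

# Model B: a simpler circuit that explains the same data.
def rule(x):
    return 2 * x

# Both are perfect in distribution...
assert all(memorizer(x) == rule(x) == y for x, y in train.items())

# ...but only the simpler rule extrapolates.
print(rule(100))       # 200
print(memorizer(100))  # None -- no opinion outside the data
```

A regularizer that penalizes circuit size would prefer model B even though both achieve zero training error, which is the sense in which "dreaming as regularization" could buy out-of-distribution generalization.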

Once you have those better generalizing circuits, you can generate data that not only matches the input data in quality but potentially exceeds it, if the priors built into the learning algorithm match the real world.



Humans produce out-of-distribution data all the time, yet if you had a teacher making up facts and teaching them to your kids, you would probably complain.


Humans also sometimes hallucinate and produce non sequiturs.


Maybe you do, but people don't "hallucinate". Lying or being mistaken is a very different thing.


Computers aren't humans.

We have truly reached peak hackernews here.



