
It can learn any input-to-output mapping. What more do you want?


The article doesn't say anything about whether it can learn it, just that it can represent it.


The proof is constructive, so actually you can. Just create a few neurons forming a "tower function" with the correct output weight at every point you want to map.
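A minimal sketch of that construction (names like `tower` and the bin-tiling scheme are my own, not from the proof): each pair of steep sigmoids builds one "tower" (a bump of chosen height over an interval), and summing towers whose heights sample the target function approximates it pointwise.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tower(x, left, right, height, steepness=1000.0):
    # Two steep sigmoids: one steps up at `left`, one steps back down
    # at `right`, leaving a flat bump of the given height in between.
    return height * (sigmoid(steepness * (x - left))
                     - sigmoid(steepness * (x - right)))

def approximate(f, xs, n_towers=50):
    # Tile [0, 1] with adjacent towers; each tower's height is the
    # target function sampled at the midpoint of its interval.
    edges = np.linspace(0.0, 1.0, n_towers + 1)
    out = np.zeros_like(xs)
    for lo, hi in zip(edges[:-1], edges[1:]):
        out += tower(xs, lo, hi, f((lo + hi) / 2.0))
    return out

xs = np.linspace(0.01, 0.99, 200)
target = np.sin(2 * np.pi * xs)
approx = approximate(lambda x: np.sin(2 * np.pi * x), xs)
max_err = np.max(np.abs(target - approx))
```

More towers (and steeper sigmoids) shrink the error, which is the whole point of the representation theorem: the network's weights are written down directly rather than learned.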

If you mean learn functions efficiently with few parameters, or in ways that generalize well, the article makes no claims about that at all. There are two "no free lunch" theorems. One says it's impossible to guarantee any method will generalize well on all problems; the other says it's impossible to guarantee any optimization method will work well on all problems. You just have to make strong assumptions like "my function can be modeled by a neural net" or "the error function is convex."

However, neural networks are agnostic to the optimization algorithm you use to set their weights. There are many different ones.
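A small illustration of that point (the toy net and both optimizers are my own choices, not anything from the thread): the loss is just a function of a flat parameter vector, so gradient descent and a gradient-free random search can both train the same network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-hidden-layer net fitting y = x^2 on [-1, 1].
X = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = X ** 2

def loss(theta):
    # Unpack a flat 25-parameter vector into weights and biases.
    W1 = theta[:8].reshape(1, 8); b1 = theta[8:16]
    W2 = theta[16:24].reshape(8, 1); b2 = theta[24:25]
    h = np.tanh(X @ W1 + b1)
    return np.mean((h @ W2 + b2 - y) ** 2)

def numeric_grad(theta, eps=1e-5):
    # Central-difference gradient; crude but optimizer-agnostic.
    g = np.zeros_like(theta)
    for i in range(theta.size):
        d = np.zeros_like(theta); d[i] = eps
        g[i] = (loss(theta + d) - loss(theta - d)) / (2 * eps)
    return g

theta0 = rng.normal(scale=0.5, size=25)
init_loss = loss(theta0)

# Optimizer 1: plain gradient descent.
t = theta0.copy()
for _ in range(200):
    t -= 0.1 * numeric_grad(t)
gd_loss = loss(t)

# Optimizer 2: random-search hill climbing -- no gradients at all.
t = theta0.copy()
rs_loss = init_loss
for _ in range(2000):
    cand = t + rng.normal(scale=0.05, size=t.size)
    if loss(cand) < rs_loss:
        t, rs_loss = cand, loss(cand)
```

Both optimizers drive the same loss down from the same starting weights; the network definition never references how its parameters get chosen.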



