
I don't think this always works if the function in question has infinitely many discontinuities...


You missed this paragraph, which occurs in the first 10% of the article.

The second caveat is that the class of functions which can be approximated in the way described are the continuous functions. If a function is discontinuous, i.e., makes sudden, sharp jumps, then it won't in general be possible to approximate using a neural net.


What would be an example of a practical task that needs to be modeled with such a function?


Rounding


Rounding is something humans can't do either. How easily can you round a stick that is almost exactly 1.5 meters long? You would never be sure that you are correct.

And as for rounding in the mathematical sense, that wouldn't be a problem for a neural network, since from the neural network's perspective 1.4999... and 1.5000... are far apart, just as they are for you.


Good example, but in this case it's actually very easy to find weights for the neural network that would do rounding with any desired precision (look for the "stack of towers" method in the article).
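
For the curious, here is a rough sketch in Python of the kind of hand-picked weights being talked about here: a single sigmoid neuron with a large weight behaves like an approximate step at a chosen threshold. The steepness value is an illustrative choice, not something taken from the article:

    import math

    def sigmoid(z):
        # Numerically stable logistic function.
        if z >= 0:
            return 1.0 / (1.0 + math.exp(-z))
        e = math.exp(z)
        return e / (1.0 + e)

    def hard_step(x, threshold, steepness=10000.0):
        # One sigmoid neuron: weight = steepness, bias = -steepness * threshold.
        # The larger the steepness, the closer the output is to a true step.
        return sigmoid(steepness * (x - threshold))

    print(hard_step(1.499, 1.5))  # ~0.000045
    print(hard_step(1.501, 1.5))  # ~0.999955

The jump can be made as sharp as you like by increasing the steepness, but it is never a true discontinuity.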


Yeah, I think I picked rounding because the article primed me to think about it (the proof presented is essentially based on the ability to create steps, and rounding is my go-to example of a step function). However, any attempt to construct such a network would have a finite number of steps in it, while the actual rounding function has infinitely many.

In practice, if we needed such a network, we would probably have a restricted domain, so there would be only a finite number of discontinuities, and a sufficiently large network could approximate it to arbitrary precision.
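
For instance, on a restricted domain such as [0, 5), one steep sigmoid per half-integer threshold is enough; summing them gives a staircase that matches round(x) everywhere except in a tiny window around each jump. A minimal sketch, where the domain and steepness are illustrative assumptions:

    import math

    def sigmoid(z):
        # Numerically stable logistic function.
        if z >= 0:
            return 1.0 / (1.0 + math.exp(-z))
        e = math.exp(z)
        return e / (1.0 + e)

    def approx_round(x, lo=0, hi=5, steepness=10000.0):
        # One hidden sigmoid unit per discontinuity (the half-integers in
        # [lo, hi)), all output weights equal to 1, output bias equal to lo.
        thresholds = [k + 0.5 for k in range(lo, hi)]
        hidden = [sigmoid(steepness * (x - t)) for t in thresholds]
        return lo + sum(hidden)

    for x in [0.2, 0.7, 1.49, 1.51, 3.9]:
        print(x, approx_round(x))  # ~0, ~1, ~1, ~2, ~4

Widening the domain just adds more hidden units, but covering all of the reals would take infinitely many, which is exactly the limitation above.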

We would not be able to do this with more exotic functions that are densely discontinuous, such as the function:

    f(x) = 0  iff x is rational
    f(x) = 1  iff x is irrational
There is no way a neural network can reasonably model that function.


Works for all numbers except those ending in .5

And they are quite important to get right.


I am a neural network, and I can round.


If you lived in the 1800's, would you have described yourself as an automaton made of gears?


Perhaps - it's not a bad analogy. But calling myself a neural network is not an analogy - it's literal.


I find it very interesting that I'm being downvoted based on the difference between my self-concept and that of whoever is doing the downvoting (they don't think they are a neural network). I'm pretty sure that's not what downvotes are supposed to be used for. Perhaps those who are doing the downvoting would be interested in reading the textbook dedicated to the notion that human beings are neural networks, called Computational Cognitive Neuroscience: https://grey.colorado.edu/CompCogNeuro


I think you might be getting downvotes because the neural networks discussed in the article have nothing to do with the neural structures in a human brain, and it appears that you're oblivious to this fact.


Stop talking about how downvotes should be used.

It's tedious, and wrong.

You aren't just getting downvotes from people who disagree with you (if that's why you're getting downvotes); you are failing to gain upvotes from anyone who either agrees with you or thinks your downvotes are unfair.

People downvote to disagree - and there's nothing in any site FAQs or guidelines to say this is wrong - but those votes are balanced by people upvoting unfairly downvoted posts.


Also, a single layer is not enough to approximate functions in higher dimensions.


This is also addressed in the article.

Reading the title is not enough; you have to read the article.



