
If I understand correctly, they are looking at problems that are simple in the sense of having no local optima, but complex in the sense of having very high dimension.

I.e., getting stuck in local optima isn't an issue for the class of problems they are optimizing.

I've been out of that domain for a while... but when I did a lot of optimizing / fitting in high-dimensional spaces with functions that had lots of local optima, I found the downhill simplex method with simulated annealing to be the most effective / robust approach.
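
A rough sketch of that kind of comparison in Python: a plain local simplex search (Nelder-Mead) versus an annealing-style global search on a function with many local optima. Here scipy's dual_annealing stands in for the classic "downhill simplex + simulated annealing" combination from Numerical Recipes, so it is not the identical algorithm, and the Rastrigin test function is just a convenient many-minima example.

    # Local Nelder-Mead vs. annealing-style global search on a multi-modal function.
    import numpy as np
    from scipy.optimize import minimize, dual_annealing

    def rastrigin(x):
        # Many local minima; the global minimum is 0 at the origin.
        x = np.asarray(x)
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    dim = 10
    x0 = np.full(dim, 3.0)            # deliberately poor starting point
    bounds = [(-5.12, 5.12)] * dim

    local = minimize(rastrigin, x0, method="Nelder-Mead")  # tends to stall in a nearby local minimum
    annealed = dual_annealing(rastrigin, bounds)           # far more likely to reach the global basin

    print("Nelder-Mead:    f =", local.fun)
    print("dual_annealing: f =", annealed.fun)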

There is a problem with our intuition about gradient descent that blinds us, a little bit.

For a local minimum to exist, the function we're looking at has to have a local minimum when sliced along every dimension at that point.

In our intuition, this is relatively common, because we usually imagine 2D manifolds embedded in 3-space (i.e., "hill climbing"). And when there are only 2 dimensions, it's not hard to have a "bowl", where the function curves upward in both dimensions.

In fact, a "saddle point", where one of the dimensions curves up and the other curves down, feels "unlikely" to most peoples' intuition, because we don't often run into physical objects with that shape.

But, saddle points in higher dimensions "should be" far more common than local minima. You only need one dimension to curve downward to have a "hole" where the "water" can escape and continue flowing downhill.

Gradient descent easily gets stuck in 2 dimensions, and so we are often surprised by how well it performs in higher dimensions.
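
A quick numerical illustration of that intuition (a sketch, not a claim about any real loss surface): treat a random symmetric matrix as the Hessian at a critical point and count how often every eigenvalue is positive, i.e. how often the point would be a true local minimum rather than a saddle. The fraction collapses quickly as the dimension grows.

    # Fraction of random symmetric "Hessians" that are positive definite,
    # i.e. would correspond to a local minimum rather than a saddle point.
    import numpy as np

    rng = np.random.default_rng(0)

    def fraction_local_minima(dim, trials=20000):
        count = 0
        for _ in range(trials):
            a = rng.standard_normal((dim, dim))
            hessian = (a + a.T) / 2                  # random symmetric matrix
            if np.linalg.eigvalsh(hessian)[0] > 0:   # smallest eigenvalue positive?
                count += 1
        return count / trials

    for dim in (1, 2, 3, 5, 8):
        print(dim, fraction_local_minima(dim))
    # With more dimensions available, at least one direction almost always
    # curves downward, so almost every critical point is a saddle.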


Yes, saddle points are far more common than local minima in high dimension. Unfortunately they're really good at slowing down naive gradient descent...
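
A minimal sketch of that slowdown, using the textbook saddle f(x, y) = x^2 - y^2 rather than anything specific from the article: starting almost exactly on the stable (x) axis, plain gradient descent crawls near the saddle for many steps before the tiny y-component finally grows enough to escape.

    # Plain gradient descent lingering near the saddle of f(x, y) = x^2 - y^2.
    import numpy as np

    def grad(p):
        x, y = p
        return np.array([2 * x, -2 * y])   # gradient of x^2 - y^2

    p = np.array([1.0, 1e-8])              # nearly on the stable manifold of the saddle
    lr = 0.1
    for step in range(1, 121):
        p = p - lr * grad(p)
        if step % 20 == 0:
            print(step, p)
    # x shrinks quickly, but for dozens of steps the iterate sits almost
    # exactly at the origin; the escape direction only grows geometrically
    # from its near-zero starting value (and then diverges, since this toy
    # function is unbounded below).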


which is great as long as you choose your problems based on what your tools are good at


Local minima basically don't exist in high dimensional spaces, so in practice the tools work for a wide variety of problems.


Definitely not true. Not even close.


Not much of a response. Care to expand here? This is a well-understood principle in the existing body of research on convergence in high dimensional spaces.


This has been very lucrative for computers so far.
