wnoise on Oct 3, 2019 | on: Gradient Descent: The Ultimate Optimizer
Yes, saddle points are far more common than local minima in high dimensions. Unfortunately they're really good at slowing down naive gradient descent: the gradient shrinks toward zero as you approach a saddle, so fixed-step descent crawls there even though it isn't a minimum...
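For a concrete picture, here's a toy sketch (my own illustration, not from the article): f(x, y) = x^2 - y^2 has a saddle at the origin. Starting almost exactly on the attracting direction (tiny y), plain gradient descent spends on the order of a hundred steps nearly stalled before the negative-curvature direction finally takes over. The function, starting point, and learning rate are all arbitrary choices for illustration.

  import numpy as np

  def grad(p):
      x, y = p
      return np.array([2.0 * x, -2.0 * y])  # gradient of f(x, y) = x^2 - y^2

  p = np.array([1.0, 1e-8])  # tiny y: almost on the saddle's stable manifold
  lr = 0.1

  for step in range(200):
      p = p - lr * grad(p)
      if step % 25 == 0:
          print(f"step {step:3d}  |grad| = {np.linalg.norm(grad(p)):.2e}")
  # |grad| collapses toward zero as x decays (each step multiplies x by 0.8),
  # while y grows only geometrically (factor 1.2 per step) from 1e-8 -- so the
  # iterate sits near the saddle for ~100 steps before escaping along -y^2.

Momentum or curvature-aware methods escape much faster here, since a generic perturbation off the stable manifold gets amplified rather than damped.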