
Part of the problem, I think, is that the image is not fully used. Generally these systems consist of a black-box routine that extracts interest points and passes them on to a SLAM routine, which in turn keeps an estimate of the car state and the interest-point positions. There is no "physical" model of the world being inferred from the images, and I imagine this makes things rather tricky (and is also why a LIDAR is so much more useful).
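
To make the structure concrete, here is a minimal sketch (my own illustration, not any particular system's code) of that front-end/back-end split: an interest-point extractor feeding a back-end whose whole "map" is just a state vector of pose plus landmark positions. Names like `extract_keypoints` and `SlamBackend` are purely illustrative.

    import numpy as np
    import cv2  # OpenCV supplies the "black-box" interest-point extractor

    def extract_keypoints(frame_gray):
        """Front-end: detect sparse interest points (ORB here) and descriptors."""
        orb = cv2.ORB_create(nfeatures=500)
        keypoints, descriptors = orb.detectAndCompute(frame_gray, None)
        return keypoints, descriptors

    class SlamBackend:
        """Back-end: jointly estimates the vehicle state and landmark positions.

        This is the usual EKF-SLAM-style layout: a small pose block plus one
        3-D point per landmark. Note there is nothing "physical" here -- no
        surfaces, no free space -- just a cloud of tracked points.
        """
        def __init__(self):
            self.pose = np.zeros(6)    # x, y, z, roll, pitch, yaw of the car
            self.landmarks = {}        # landmark id -> estimated 3-D position

        def update(self, matched_observations):
            """Fuse matched observations of known landmarks (placeholder step)."""
            for lm_id, xyz_estimate in matched_observations:
                self.landmarks[lm_id] = xyz_estimate

The point is that everything the back-end knows about the world is in `self.landmarks`; the rest of the image is thrown away after the extractor runs.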

AFAIK deep learning hasn't really changed this way of doing things much - especially the mapping part.


