
I have produced 3D maps with lidars, drone-mounted near-infrared cameras, and thermal infrared cameras.

You can tell grass apart from green carpet with a simple formula. You can count trees without machine learning. You can detect which plants are wilting, and tell land that is wet from land that is dry. All of that is easy with the right sensors, because they capture more data than an RGB camera can produce.
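(For anyone wondering, the "simple formula" here is typically NDVI: chlorophyll reflects near-infrared strongly while green paint and dyed fibre do not, so a band ratio separates them cleanly. A minimal sketch in Python/NumPy; the reflectance values and the 0.3 threshold are illustrative, not from any specific dataset:

    import numpy as np

    def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
        """Normalized Difference Vegetation Index, in [-1, 1]."""
        nir = nir.astype(np.float64)
        red = red.astype(np.float64)
        denom = nir + red
        denom[denom == 0] = 1e-9  # avoid division by zero on dark pixels
        return (nir - red) / denom

    # Toy reflectance values: live grass reflects NIR strongly,
    # green carpet reflects NIR and red about equally.
    nir_band = np.array([0.50, 0.12])   # [grass, carpet], hypothetical values
    red_band = np.array([0.08, 0.10])
    print(ndvi(nir_band, red_band))     # grass lands well above 0.3, carpet near 0

Vegetation typically scores well above ~0.3; man-made green surfaces sit near zero.)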

I know people who work with multispectral imagery; they can tell you that pixel N45 contains a specific substance (concrete, steel, or wood) just from its spectrum alone. They don't need to know what the surrounding pixels show, or classify objects.
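(Per-pixel material identification like this is commonly done with spectral matching, e.g. the spectral angle mapper, which compares each pixel's spectrum against a library of reference spectra with no spatial context at all. A rough sketch; the 4-band reference spectra below are made up for illustration, real libraries use hundreds of bands:

    import numpy as np

    def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
        """Angle in radians between two spectra; smaller means a closer match."""
        cos = np.dot(pixel, reference) / (
            np.linalg.norm(pixel) * np.linalg.norm(reference)
        )
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    def classify_pixel(pixel: np.ndarray, library: dict) -> str:
        """Label a pixel with the closest reference material by spectral angle."""
        return min(library, key=lambda name: spectral_angle(pixel, library[name]))

    # Hypothetical reference spectra for three materials.
    library = {
        "concrete": np.array([0.30, 0.32, 0.33, 0.34]),
        "steel":    np.array([0.55, 0.50, 0.45, 0.40]),
        "wood":     np.array([0.20, 0.25, 0.40, 0.45]),
    }
    print(classify_pixel(np.array([0.31, 0.31, 0.34, 0.33]), library))  # -> concrete

The angle metric ignores overall brightness, which is why it works on a single pixel without looking at its neighbours.)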



Agreed; I have a similar background with both LiDAR and vision for 3D reconstruction and mapping systems, and I've designed some fairly impactful commercial multispectral software that is now widely used in the agricultural space. Vision can give you perfectly sufficient data to build world models and to localise yourself rapidly and robustly. What I believe is missing on the Tesla side is primarily the navigation and 'social interaction' component of driving.

It's not like Waymo dropped a LiDAR onto the roofs of their vehicles and started driving unsupervised in traffic the next day. Neither did Cruise, nor Uber. The sensing is just a small part of the whole system.



