Why not use two cameras for training AI then?



Because they’re using existing data. You need thousands, maybe millions of images to train an AI to recognise something well and to pick up only the right characteristics. No one has the resources to go take all those photos themselves.

Does anyone know of a visual recognition AI that is also trained with depth data? I'd be interested to see what difference it makes (a rough sketch of what that could look like is below).

This relates to something else I noticed about how my daughter learns. You can show her one photo of a lion, from one angle, and she will recognise other lions later on, at different angles. I think she must have seen enough animals from many angles already to have generalised their shape, so she can presume the new animal is similar and just take in the new characteristics, like a mane. Something very different is happening in human brains!
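For concreteness, a minimal sketch of what "also trained with depth data" could look like: treat the depth map as a fourth input channel next to RGB. This assumes PyTorch, and the network, shapes, and class count are illustrative, not any particular published model.

    # Sketch: extend a standard CNN to take RGB-D input (4 channels, not 3).
    import torch
    import torch.nn as nn

    class RGBDClassifier(nn.Module):
        def __init__(self, num_classes):
            super().__init__()
            # First conv takes 4 channels: R, G, B, plus a depth map.
            self.features = nn.Sequential(
                nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, rgb, depth):
            # rgb: (N, 3, H, W), depth: (N, 1, H, W); stack along channels.
            x = torch.cat([rgb, depth], dim=1)
            return self.classifier(self.features(x).flatten(1))

    # Usage: a batch of 8 random 64x64 "images" with matching depth maps.
    model = RGBDClassifier(num_classes=10)
    logits = model(torch.rand(8, 3, 64, 64), torch.rand(8, 1, 64, 64))

RGB-D datasets such as NYU Depth v2 pair colour frames with depth maps, which is the kind of data a model along these lines would train on.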


You are right, and it would be interesting to quantify how much it could improve AI if datasets were binocular (stereo image pairs).
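One way to run that experiment, sketched below: a two-stream classifier with a single shared encoder over the left and right views, compared against a one-view twin on the same labels. PyTorch again, and every name and shape here is illustrative.

    # Sketch: a "binocular" classifier sharing one encoder across both views.
    import torch
    import torch.nn as nn

    class StereoClassifier(nn.Module):
        def __init__(self, num_classes):
            super().__init__()
            # One shared encoder plays the role of both "eyes".
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # Left and right features are concatenated before the head.
            self.head = nn.Linear(128, num_classes)

        def forward(self, left, right):
            feats = torch.cat([self.encoder(left), self.encoder(right)], dim=1)
            return self.head(feats)

    model = StereoClassifier(num_classes=10)
    logits = model(torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64))

Training this and a single-view baseline on the same labels, then comparing accuracy, would put a number on what the second view adds.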


Pictures uploaded to Facebook and Google were only taken with one camera :P



