Will this new neural net and hardware be capable of advanced object detection?
For instance, if a plastic bag or piece of cardboard rolls across the highway, a human driver knows it's safe to run over without stopping. Would a system like this just see an obstacle on radar and emergency-brake?
Google has been working on this problem for longer, and they have access to the largest image/video datasets in the world to train their models. I wonder how the Google and Tesla systems would compare.
What I would find more interesting is whether neural nets are even a valid way to build automotive components for self-driving cars.
Safety-critical components in most industries are currently built very heavily around the idea of determinism, so that you know exactly what will happen for a specific input and when it will happen, and so that the behavior specified in the requirements can be traced to the code, with all the process documentation that goes along with it. Neural networks, from my point of view (university knowledge only), seem more like black boxes, where you don't know exactly how they will behave in the end.
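To make the contrast concrete, here's a toy sketch. The function names, the threshold, and the weights are all hypothetical illustrations, not real automotive code:

```python
def brake_decision_deterministic(distance_m: float) -> bool:
    # Hypothetical requirement "REQ-BRK-01": brake if an obstacle is
    # within 30 m. The behavior is fully traceable to this one line.
    return distance_m < 30.0

def brake_decision_learned(features, weights, bias) -> bool:
    # A one-neuron "network": the decision emerges from trained weights,
    # so there is no single requirement you can point to in the code.
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return score > 0.0

# Same obstacle distance, but only the first decision can be audited
# line-by-line against a requirement; the second depends entirely on
# whatever the training process happened to produce.
print(brake_decision_deterministic(25.0))                     # True
print(brake_decision_learned([25.0, 1.0], [-0.1, 2.0], 0.4))  # False
```

The point isn't that the learned version is wrong here; it's that verifying it means characterizing its behavior over the whole input space, rather than reading the requirement off the source.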
I highly doubt it. They're not including lidar, which means no high-resolution point clouds for identifying objects. In my opinion, that just doesn't cut it: ultrasonic is limited to relatively short range, and radar doesn't provide the level of detail you need to really understand what is going on around the car beyond "there's some kind of object 100 feet ahead".
> Google has been working on this problem for longer and they have access to the largest image/video datasets in the world to train their models
As of August, Google's self-driving cars have 2 million miles of real on-road experience, while Tesla's Autopilot systems have 140 million, and those cars are beaming data back to Tesla the whole time.
My understanding is that because Tesla has to send its data over the cell network, it is first heavily processed/reduced by the car's onboard computer before being sent back (to conserve bandwidth). So I'm not sure it's fair to compare the two companies' datasets on miles driven alone.
"if a plastic bag or piece of cardboard rolls across the highway a human driver knows it's safe to run over without stopping." Well, usually. But frequently they don't see it at all, or freeze and hit it whatever it is, or swerve three lanes over, or jam on the brakes in front of you.
It only has to be better than humans, not perfect.
False. It has to be perfect, or Tesla gets sued. A plastic bag flies in front of your car, the car brakes hard, and the one behind you hits yours and gives you whiplash. Lawsuit time.