Hacker News

>While this is occurring, Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control.

But not the software? And they don't even have confidence in their current implementation?

It's not surprising considering the recent announcements by the regulators, but that's quite a step.



> But not the software? And they don't even have confidence in their current implementation?

I think it's quite forward thinking for them to include the hardware when it's ready, knowing they can update the software in the future. And I'd prefer them to be conservative in rolling out the software for an automobile. Not something I'd want to see beta tested on the highway.


>Not something I'd want to see beta tested on the highway.

It's not like the previous-gen cars aren't on the highway at the moment. Or has "autopilot" been deactivated in the meanwhile?

I understand their step to add better sensors for the future (even though it seems difficult without LIDAR). But disabling autopilot for new cars with better sensors and keeping it enabled for older ones seems like a strange step.


I'm guessing the data is just too different (probably no more Mobileye sensors, for example) and not worth adapting to the older ML models (which would itself require extensive testing) when the new system is going to end up with different ML models anyway.

[edit] Tesla previously hired Jim Keller (chip designer) into the autopilot team. Considering the kinds of things he may be working on, I'd be surprised if the differences in either the sensors or GPUs aren't significant.


Fair enough.


This means they don't have confidence in running the same tried-and-tested software on a completely new platform they just launched. Which makes complete sense; wouldn't it be irresponsible otherwise?

It gives them time to confirm a new technology and update their maps of areas with information from the new wavelengths they're just now gathering.


> It gives them time to confirm a new technology and update their maps of areas with information from the new wavelengths they're just now gathering.

Fair enough. I just tried to do a diff between the versions:

Current implementation [0]

* Camera module in the front

* Front-facing Radar

* 12 ultrasonic sensors

New implementation:

* 8 surround cameras

* Front-facing radar

* 12 ultrasonic sensors (updated)

[0]: https://www.quora.com/What-kind-of-sensors-does-the-Tesla-Mo...

So the difference is basically just a few extra cameras and updates to the sensors. It doesn't seem like a huge step or completely new platform - at least when looking at the components.
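As a rough illustration, the component diff above can be expressed as a set comparison. The labels here are paraphrased from the lists in this thread, not official Tesla part names:

```python
# Hypothetical labels for the two sensor suites, taken from the lists above.
hw1 = {"front camera", "front-facing radar", "12 ultrasonic sensors"}
hw2 = {"8 surround cameras", "front-facing radar", "12 ultrasonic sensors (updated)"}

shared = hw1 & hw2    # components common to both suites
added = hw2 - hw1     # new or changed in the second suite
removed = hw1 - hw2   # dropped or replaced from the first suite

print(sorted(shared))
print(sorted(added))
print(sorted(removed))
```

Run this way, only the front-facing radar is literally unchanged; the camera setup and the ultrasonics both differ, which supports the "extra cameras plus sensor updates" reading.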


What about the 40x faster processor with neural nets thrown in there somewhere? There's a bit more than just new cameras/sensors.


>neural nets thrown in there somewhere?

What techniques were they using to process the data before? Surely it was some form of statistical learning. And if they weren't using it, their competitors certainly were.


For version 8 of their software, the camera is now the primary sensor. And that's where the biggest hardware difference between the models is.


I don't think they ever believed the previous sensors were capable of "full autonomy". The reason they didn't throw in all these sensors sooner is that it would have been a premature optimization given their lack of experience, and also pretty expensive in terms of hardware (which includes the GPUs necessary to make use of the sensors).



