Four hundred miles with Tesla’s autopilot forced me to trust the machine (arstechnica.com)
206 points by jonbaer on May 25, 2016 | 146 comments



From the article: "Auto-steer had to wait for the freeway, where I toggled it on. Unfortunately, I toggled it right back off. The portion of I-45 near my home is undergoing a multi-year $6 billion rebuild, with a long stretch spanning several exits being destroyed, reconfigured, widened, and otherwise "improved." The result is a multi-mile segment of road that snakes back and forth across what will eventually be both sides of the finished project. For now, there's no consistent surfacing, no real lane markers, and nothing for the lane-keeping system to see."

"Lane keeping was almost excellent. The car kept itself squarely planted in the middle of the lane without ping-ponging between the edges like some earlier systems did. The car would still sometimes "lunge" at exit ramps when lane markings would fall away to the right or left, though the behavior was never enough to make me grab the wheel and disengage."

So Tesla just has lane keeping and smart cruise control, like everybody else. NHTSA Level 2. But they're way ahead in PR.

Google can handle construction, changed lanes, offramps, and city streets, and it follows navigation directions. They're NHTSA Level 3.

Handoff from automation to human in emergency situations is still a big problem. Here's such a case. [1] Car with autopilot rear-ended stopped car on freeway. Why? The driver tapped the brake while approaching the stopped car. This disengaged the autopilot, including the emergency braking function, under the assumption the driver was now in charge. Tesla claims the driver should have understood that, as of firmware version 6.2, the auto-braking function was disabled by any manual braking.

[1] http://arstechnica.com/cars/2016/05/another-driver-says-tesl...


Having emergency braking turn off in the event of a manual brake tap is definitely a failure on Tesla's part. Emergency braking should not only not turn off it should brake harder (to the expected traction limit) and then, of course, engage abs if the traction limit is exceeded. Emergency braking should be on unless explicitly disabled (just like abs and traction control are auto on in cars that have them).
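To make the point concrete, here's a minimal sketch of the policy being argued for: emergency braking stays armed regardless of pedal input and commands whichever deceleration is larger, the driver's or the system's, up to an assumed traction limit. All names and numbers are illustrative, not anyone's actual firmware.

```python
TRACTION_LIMIT_G = 0.9  # assumed peak deceleration on dry asphalt, in g

def brake_command(driver_demand_g: float, collision_imminent: bool) -> float:
    """Return the deceleration to command, in g."""
    if collision_imminent:
        # Never reduce braking below the emergency level just because
        # the driver touched the pedal; take the stronger of the two.
        return max(driver_demand_g, TRACTION_LIMIT_G)
    return driver_demand_g

# A brake tap (0.2 g) during an imminent collision still yields full braking:
assert brake_command(0.2, collision_imminent=True) == 0.9
# With no threat, the driver's input passes through unchanged:
assert brake_command(0.4, collision_imminent=False) == 0.4
```

ABS would then sit below this layer, modulating the commanded deceleration whenever the traction limit is actually exceeded.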


> Having emergency braking turn off in the event of a manual brake tap is definitely a failure on Tesla's part.

Yes, but the failure mode is not so easy to see prior to use. It's due to how humans react to the new feature itself, not because it's a bad feature, or a misunderstanding about how users 'normally' react. Users may incorrectly anticipate a behavior (as in this case), or they may expect the feature to fail and so take control unnecessarily (the cause of some airline disasters).

This is a general problem of human-computer interaction. I'd say it falls under the discipline of UX design. Knowing that breaking is an automatic reaction should be a clue that it will be a poor mechanism for understanding user intent. You then know you should be looking at other mechanisms to more accurately capture user intent.

- Perhaps the auto-breaking feature should have better defaults, such as erring towards enabled rather than disabled.

- Perhaps the control for toggling auto-breaking should not also be the breaking mechanism itself.

- Perhaps context is now relevant, and user input should be interpreted differently when auto-breaking is on and there is an imminent collision.

I think it's important to pay special attention when making any interface change to a well-established user behavior, and to acknowledge that new behavior leads to unstable and unreliable input. In these cases, put extra effort into finding the path of least cognitive burden where the accuracy of user input is most critical (i.e. an imminent collision), even at the expense of adding cognitive burden elsewhere.
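A sketch of the "separate, explicit control" idea in the bullets above: the system can only be disarmed by a deliberate switch, never implicitly by pedal input, and pedal input is interpreted differently while a collision is imminent. Entirely hypothetical, not any manufacturer's actual logic.

```python
class AEB:
    def __init__(self):
        self.armed = True  # err towards enabled by default

    def toggle_switch(self):
        # The only path to disarming: an explicit dashboard control.
        self.armed = not self.armed

    def on_brake_pedal(self, collision_imminent: bool) -> str:
        if self.armed and collision_imminent:
            # During an imminent collision, the pedal adds to, rather
            # than cancels, the automatic intervention.
            return "full_braking"
        return "driver_braking"

aeb = AEB()
assert aeb.on_brake_pedal(collision_imminent=True) == "full_braking"
aeb.toggle_switch()  # driver explicitly disables the system
assert aeb.on_brake_pedal(collision_imminent=True) == "driver_braking"
```

The point of the state machine is that the driver's foot never silently changes the mode; only the switch does, and the switch state can be shown on the dashboard.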

Cognitive burden is a critical consideration in all system design from airplanes to cell phones, but tragically is often overlooked.


> It's due to how humans react to the new feature itself, not because it's a bad feature, or a misunderstanding about how users 'normally' react.

It is not easy, but it should be done. Keeping a car steadily between the lane lines is not easy either; nor is self-driving in general, and that's being attempted. Besides, research into some of this has already been done with aircraft and pilots.

> It's due to how humans react to the new feature itself, not because it's a bad feature, or a misunderstanding about how users 'normally' react.

It is a bad feature. A feature doesn't exist in a void; it exists only because humans are using it. "Human presses the brakes, what to do next?" is a feature with a human in the loop.

> Cognitive burden is a critical consideration in all system design from airplanes to cell phones, but tragically is often overlooked.

Exactly. Moreover, "mode confusion" (the human not understanding what mode the aircraft's autopilot or other systems are in, and then reacting assuming a different mode) is a common cause of crashes.

There are tools and studies specifically geared towards mode confusion:

http://shemesh.larc.nasa.gov/fm/fm-now-mode-confusion.html

Asiana 214 is a recent example: the pilot assumed the autothrottle was on, but it wasn't, and the plane crashed. It is a bit like this case, where the user assumed the Tesla would slow down some but stay in automatic mode.


I believe you mean auto-braking, not auto-breaking, in all the instances you used the term, right? That is, we want the vehicle to brake, but not break.


So you're saying my subliminal messaging was too obvious? ;)


>> Emergency braking should be on unless explicitly disabled

I'm trying to think of a scenario outside of a mechanic's garage that emergency braking would ever be disabled. This should really be mandatory on all cars.


It's surprising that Tesla made that choice.

BMW's anti-collision system is on even during manual driving.[1] BMW will brake even if the driver is pressing the accelerator. So will Volvo. Top Gear compares the various systems as of 2015.[2] Volvo did best; Mercedes didn't stop soon enough and hit the dummy car. VW's city car only does this up to 18MPH.

Maybe Tesla has a false-alarm problem that BMW and Volvo have overcome.

More to the point, this behavior needs to be standardized, or people shifting from one car to another will do the wrong thing. The National Transportation Safety Board is working on this.[3] In a few years, anti-collision may be as standard as anti-skid is now.

[1] http://www.theverge.com/2015/1/5/7494989/bmw-i3-self-parking... [2] https://www.youtube.com/watch?v=PzHM6PVTjXo [3] http://www.ntsb.gov/safety/safety-recs/recletters/H-15-008-0...


I love, love, love the dynamic cruise control on my 2014 Mazda 3 Astina. I stretched my budget for this, our second car, as I was dead set on getting next-generation safety features. Got an ex-demo manual; couldn't be happier with the overall package.

Even though my car is a 6sp manual, the automatic distance keeping is making the commute much more relaxing. I can't recommend it enough.

Funnily enough, the radar system is separate from the AEB, which uses cameras. AEB stands for automatic/autonomous emergency braking.

As someone who likes to know how stuff works, especially in cars, for familiarity should a real event occur, I set out to experience how AEB works. I rigged up some large but light cardboard boxes, reversed down the road a bit, accelerated quickly to around 40 km/h, and kept the throttle pedal steady as if to drive through them.

Every time, although it felt like surely it wouldn't, the car intervened at the very last second. The first time it still drove through them by a metre; the next two times it stopped before impact. The system uses 100% brake force, engaging the ABS and also putting on the hazard lights for good measure.

I came away highly impressed. It ought to pick up cars even better, but I'm not about to actively try that.


In some urban environments it is necessary to nudge very slowly through pedestrian crossings to thread through gaps; if the car brakes at the sight of any perceived obstacle and waits for a totally clear right of way, there are certain crosswalks and right turns in Chicago you will not pass for several hours.


Well if you're going under 5mph then it's not really 'emergency braking' any more.


Good point. For those situations, there should be a separate override button to disable the feature, just like there often is one for traction control (on normally, button lights up when disabled). Or for the passenger airbag back before weight sensors replaced a manual override.

But otherwise, even when in manual driving mode, the emergency brake feature should be active and watching for impending collisions.


Just make the emergency brakes kick in only above 5km/h?


It is, but if you press the brake pedal when it starts braking, it turns off under the assumption that you want to manage braking yourself.


That's why it should be a separate button, so that it never assumes you want to manage braking unless you specifically turn the system off. Just pressing the brake pedal doesn't mean you want automatic braking to go away.


> I'm trying to think of a scenario outside of a mechanic's garage that emergency braking would ever be disabled. This should really be mandatory on all cars.

So there's an accident, and I swerve to avoid. I think I'm going to make it but the car isn't certain so it brakes, and it brakes really fucking hard. At this point who is responsible for what happens? Braking like that fundamentally changes the behaviour of the car. If I had just braked, maybe I stop in time, maybe I don't - I chose another option and it is completely valid.

The very idea of that kind of behaviour terrifies me. I'll take a 100% automated car with pleasure, but I have no interest at all in this hybrid control concept. I would turn that shit off on delivery.


> So there's an accident, and I swerve to avoid. I think I'm going to make it but the car isn't certain so it brakes, and it brakes really fucking hard. At this point who is responsible for what happens?

I'm having problems understanding your scenario. How exactly does the car braking prevent you from swerving? The brakes would only slow your car mid-swerve, and you wouldn't collide. The only scenario where speed is an advantage when swerving is when you're trying to needle into an opening in the next lane and you need to match the speed of a third car.


When swerving you should not apply the brake. It will cause you to lose control in the swerve.


Not with a decent ABS system, and all the cars that have automatic braking have pretty good ABS.


Braking uses all the traction between your tires and the road to slow down your car, and you will be unable to turn. In the case of ABS, you will turn at a slower rate than if you hadn't been braking.
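The trade-off described here is the classic "friction circle": longitudinal and lateral acceleration share one traction budget, roughly sqrt(a_brake² + a_turn²) ≤ μ·g. A quick illustrative calculation, where μ = 0.9 is an assumed dry-asphalt value, not a measured one:

```python
import math

MU, G = 0.9, 9.81  # assumed friction coefficient; gravity in m/s^2

def max_lateral_accel(brake_accel: float) -> float:
    """Lateral acceleration (m/s^2) still available while braking at brake_accel."""
    budget = MU * G
    if brake_accel >= budget:
        return 0.0  # all traction spent slowing down; no grip left to turn
    return math.sqrt(budget**2 - brake_accel**2)

print(round(max_lateral_accel(0.0), 2))  # no braking: ~8.83 m/s^2 available to turn
print(round(max_lateral_accel(7.0), 2))  # hard braking: only ~5.38 m/s^2 left
print(max_lateral_accel(9.0))            # threshold braking: 0.0, can't steer at all
```

Which is the quantitative version of the comment: the harder you brake, the slower you can turn, and at the traction limit you can't turn at all.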


Which promptly trains all drivers into a "meh" attitude toward braking, leaving it to the computer, and then leaves them surprised into paralysis (and a subsequent crash) when it doesn't work.


The law of unintended consequences: I can even imagine a net increase in accidents until the technology is mature enough to actually take over full responsibility. So far all of the assistive technologies have been straightforward to understand and control - by that I mean ABS, ESP, velocity keeping, and auto-braking. You were never tempted to think the car would drive for you with these. But the moment you add lane keeping and the system takes over the steering wheel, there's no immediate obligation to pay attention anymore. It's deep in the uncanny valley, and I'll have none of it until we have fully autonomous cars.


Another user linked a Fifth Gear video testing the auto-braking systems in Volvo/Mercedes/VW cars. In it, they say the auto-braking systems only engage by braking hard at the last second specifically so you don't get blasé about it as if it were a smooth controlled braking. It's not a mode you want to use because it really feels like you're going to crash. https://www.youtube.com/watch?v=PzHM6PVTjXo


That's interesting; I can see the reasoning behind it, but what happens on a slippery road?


That sounds like exactly what happened in the crash scenario. The driver tapped the brake, said "meh, the car will handle it", and then plowed into the car in front of them.

I'm pretty leery of these systems at the moment due to people using them like this. That's just not what they are ready for at the moment and people's unrealistic expectations are leading them down dangerous paths.


Is this not basically what Autopilot is doing for driving in general?


Humans can be trained out of surprise paralysis, for instance through simulation. It could be part of the requirements for a driving permit.


fog / rain / low visibility (potentially fooling the sensor into thinking there's an object in front of you), cars behind you?


That raises an interesting question: it is 2016; do we (or do we not) currently have economically feasible technology to put into cars that can reliably and accurately determine when you are about to hit a solid object?


Define "Solid object".

In the lab, opaque textured objects in sufficient lighting are pretty easy to localize by a variety of methods. Reflective objects are quite a lot harder, but there are methods that work. Reflective objects through distortion like rain/fog... harder, but often doable. In low light? Depends.

The weird thing with environment modelling in computer vision is that there are a lot of overlapping methods, but sensor fusion of these methods is rarely attempted. Most robots end up with a pair of laser rangefinders or a pair of cameras or something equally simple to program. Creating a high-reliability automated vehicle that can bypass all the edge cases where these things fail is another thing entirely. You want overlapping, inter-reinforcing models of the world, with algorithms that not only identify items in a well-calibrated system, but calibrate the system in real time based on items identified.

My lab was really excited about replacing stereoscopic computer vision with monocular computer vision for 3D modelling, but the opposite direction is where you want to go if you care about as many 9's of reliability as possible and hardware is cheap. You want a few dozen cameras with fixed orientation, you want parallax laser rangefinders in a variety of bands, you want time of travel LIDAR units, you want a {GPS + 3DoF gyroscope + 3DoF magnetometer + 3 DoF accelerometer + barometer + thermometer}†, you want ultrasonic and radar rangefinders; You want everything you can get. You want enough lenses that if one gets obscured by water, hey, there are others that will pick up the slack, and do it without bothering the driver.

You want to shrink the 'no coverage of object in environment' surface as small as possible.

Nobody seems to respect this on the academic software side. Novelty, sure, but people don't seem to earn grants publishing extra 9's using a mashup of a bunch of pricy sensors. 'Less is more' is not how you get the data needed to make things reliable, it's how you get best-case detection rather than worst-case awareness.

†This combination is now found in quadrotor IMUs. Each individual sensor is inaccurate enough that it would be useless for automation, but combine them to check each other's deficits and you get a remarkably elegant system.
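The footnote's point, cheap sensors compensating for each other's deficits, is often implemented as a complementary filter: the gyro is accurate short-term but drifts, the accelerometer-derived angle is noisy but drift-free, so you blend them. A minimal pitch-angle sketch; `alpha` and `dt` are typical tuning values, not from any particular IMU:

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyro rates (deg/s) and accelerometer angle estimates (deg)."""
    angle = accel_angles[0]  # initialise from the drift-free sensor
    estimates = []
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        # Integrate the gyro (smooth, but accumulates drift), then pull
        # the estimate gently back towards the accelerometer reading.
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
        estimates.append(angle)
    return estimates

# A stationary sensor with a biased gyro (constant +1 deg/s drift) and a
# noiseless accelerometer reading 0 deg: pure integration would drift to
# 10 deg over 10 s, but the fused estimate stays bounded below 0.5 deg.
est = complementary_filter([1.0] * 1000, [0.0] * 1000)
assert abs(est[-1]) < 0.5
```

This is the crude two-sensor version of what the comment advocates at vehicle scale: each modality is individually inadequate, and the reliability comes from the fusion.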


Where the solid object is a car or truck, yes. Phased array radar is good enough for that, and there are lots of deployed units on the road. Telephone poles, deer, and pedestrians - maybe.


The canonical example would be a plastic bag on the freeway.


The point is that any anti-collision system should be smart enough to tell the difference between a car and a bag. It should NEVER let you plow into another car just because you tapped the brake. It shouldn't even matter whether autopilot is on or off. Only if it were a child or an animal could I see the bag argument.

Also, it was a UI failure not to notify the driver clearly that autopilot had disengaged. Even the primitive cruise control in my car has a big green light that turns off when you tap the brake. The more autonomous a car is, the more important it is to clearly notify the driver of a control handoff.


Well of course, but there are always edge cases. A large rock, or a football? A person, or a cardboard cutout of a person? An empty plastic bag, or a heavy bag with clearly something in it? A human can instantly tell the difference in all three cases, but for a computer to do it you would need some extremely good image recognition (in the case of a loaded bag the problem becomes near impossible: it's hard enough just to recognize something as a bag, and telling whether it's weighted requires spatial awareness that no one has proposed how to achieve yet). Meanwhile, our best image recognition software can't tell a zebra and a sofa in a zebra print apart. So for now, while we wait for the algorithms to improve, there should always be an easy (with an emphasis on easy) way to disable automatic braking.


Granted, ABS invalidates the following argument a little, but not entirely. The traditional wisdom when faced with bad road conditions and an impending accident is that maintaining control through steering is much preferable to braking, which lowers the speed with which you can manoeuvre out of the situation.

It all depends so much on various variables so I find it hard to categorically say "it should always be on."


You are going 80mph on a motorway, and a large shopping bag blows into your way - having the car do an emergency stop in that situation would be extremely dangerous, so a gentle tap on the brake to tell that car "yes, I am in control of the situation, there is an obstacle in front but I don't mind hitting it" is the only solution.


Signalling the car "yes, I'm in control" is a good solution; however, tapping the brakes is (a) definitely not the only way to send that signal; (b) not an intuitive way to send it if you want to but aren't sure how; and (c) not a safe interpretation, as people will likely tap the brake with intent other than the signal "it's ok, I got this".

An appropriate way to signal it would be distinct from any emergency controls and would also show a highly visible indication that you've turned the system off and are in a different "mode"; even a simple separate dashboard switch with a light would be better than the brake pedal.


I can think of two off the top of my head.

A deer crossing the road in icy conditions: it's better to hit the deer than to lose control.

A large tumbleweed blowing across the road with someone tailgating you.


I would not want to write that if statement.


Lane keeping on a Tesla is miles ahead of lane keeping on other cars. It centers in the lane, slows down for and anticipates curves, etc. It is really a game changer vs. traditional "lane keeping".


I've never driven a Tesla to compare, but the 2017 Audi A4 does lane keeping much better than I expected and would say the A4 is comparable based on what others have said about Tesla.

I think Mercedes has a pretty comparable system as well, but can't speak from my own driving experience.


It didn't include Audi, but FWIW the only head to head comparison I've read put Tesla far ahead of Mercedes, Infiniti and BMW - http://www.caranddriver.com/features/semi-autonomous-cars-co...


I'm still wondering what an automatic car(or merely one that has an automatic lane-keeping system) is going to do on a road like this:

http://gfx.dlastudenta.pl/photo_new/d90/b27/097/537/102764

Or like this:

http://img.sadistic.pl/pics/a87df587b218.jpg

Not everyone lives in countries with amazing road infrastructure, the pictures above were taken in a country that is a member of the EU,not some far Russian province.


Bring the car to a halt, and ask the user for help?


I've ridden in a Tesla, and I can assure you it does not anticipate or correctly react to all curves. For example, a 2016 Tesla S totally overreacted to this slight jog in 101, terrifying all of us and forcing the driver to take over. IMHO, not yet a game changer.

https://www.google.com/maps/place/US-101,+Mountain+View,+CA+...


> Google can handle construction, changed lanes, offramps, and city streets, and it follows navigation directions.

Oh that's great news! Now where can I buy a Google car again?

There's a big difference between a general production-ready car and what's basically an internal-use-only car.


Hmmmm, highway conditions vs. 25 mph, the top speed of Google's self-driving cars. The Tesla Autopilot is a lot more impressive, as it goes a lot faster than 25 mph.


It's also really good at 25 mph. When the roads are not busy in my area, I take it out on curvy roads that might normally be 35-40, slow Autopilot down to, say, 25, and it's really good. At speed there are still a few turns it misses. A year ago it couldn't do even half of what it can do at speed today, so it's also improving with every update.


Google went for a 25 mph top speed for regulatory reasons (the car is basically classified as a golf cart that way), not for technological reasons.


I would be absolutely fucking terrified to engage autopilot at all in Mountain View, let alone at 25mph.

It's a great feature, but it's a great feature for boring driving.


> it's a great feature for boring driving

I'd much rather have a "boring" driver on the road than deal with aggressive drivers.


Agreed. My point was that it doesn't deal well with "exciting" driving. Given a long stretch of mostly empty interstate highway Autopilot is great (and keeps it suitably boring/low stress). It does miserably in suburban areas (let alone urban areas).


I was chatting with a friend who runs a haulage company recently. Some of their new trucks come with auto-braking features. He told me they had to have this disabled. The truck would be driving along in the slow lane on a motorway minding its own business, and suddenly a car would dive across the front of the truck from the fast lane, aiming for an exit. The truck would go to maximum auto-brake, trying to avoid a collision that wasn't going to happen, even though the car was crossing only a few feet in front. Not only dangerous for the manual truck behind, but it caused too much damaged cargo and corresponding insurance claims. Apparently car drivers do this all the time. Anyway, I thought it was interesting that the correct action here was to not auto-brake in some situations that were clearly dangerous.


The car should react to something closing up rapidly in front of it, which is not the case in that situation. Seems like just a poor implementation.


This is how my Subaru behaves, and it feels right. The only false-positive stopping I've encountered is someone coming from the opposite direction turning left across my path. And even then, it only happens when they're close enough that if they weren't moving, it would be a problem.


How far out are we from truly autonomous personal vehicle travel? I want to set my route and play a game or work or sleep. I want the vehicle to automatically stop and charge when it needs to. If I'm feeling like it, I can get out and use the restroom.

I would be so much more likely to make frequent cross country trips to visit family and so much less likely to ever book a flight unless I'm needing to cross water. I wonder how this would hit the airlines in the US.


The Tesla gets the easy parts down. Highway driving is largely uneventful, straight travel. It's the rest of it that's hard: detecting stop signs and lights, pedestrians, and all the other corner cases that make the tail end of the problem significantly harder. Google's press releases about their cars make it seem like it's a few years off, but I'd actually expect it to be about a decade, more because of the testing and regulatory changes that need to happen before we see them sold. A Tesla has the capability to do the worst of that long-haul trip, but it has the charger problem right now; they're trying to get more of them out there.


And I wouldn't call heavy rain or snow a corner case anywhere outside California. I wonder if we'll tune our infrastructure to help autonomous cars drive in low-visibility situations where human drivers are often guessing the lanes - for example, digital markers that help the car orient itself without vision.


I would hope my tax dollars aren't spent embedding digital markers in public roads so that rich Tesla owners can sleep while being transported.


What about your tax dollars being spent embedding digital markers in public roads so that the roads will become much safer and more efficient?


Similar things could have been said about roads: "I would hope my tax dollars aren't spent on building roads so that it's quicker for rich car owners to travel between cities"


Google's cars have logged a lot of miles, but only over a few miles of actual road in Northern California--road for which Google has extremely detailed maps already in place.

It's going to be a long time before a Google car can just drive anywhere you tell it to.


It might happen significantly sooner if we start modifying roads and intersections to be more robo car friendly.


That's gotta be a bigger project in terms of man power and money than anyone has invested in self driving cars already. We need more testing to even establish the problem areas and figure out how to actively solve them.


Google has more data, and CPUs and GPUs are more potent. And yes, society might now take self-driving vehicles into account, e.g. with a parallel "IoT" road-information system. One worry, IIRC, is that the Google car's sensors are only good for that climate and fail rapidly in other conditions; there's no good technology for fog, rain, or snow yet.


Google's Street View cars have covered a lot of miles, so they could have good maps of most areas.


The mapping for a self-driving car needs to be much more detailed than what you see in google street-view. It includes things like the height of curbs. Here's an overview: http://spectrum.ieee.org/automaton/robotics/artificial-intel...

Also, errors in street-view are minor annoyance; for a self-driving car, an error in its internal map could be deadly.


That would be fine with me. If I handled the local roads and the car drove only on the freeway that would solve like 80% of the problem.

And because such a thing would be a huge win for trucking companies I fully expect the US will eventually instrument the entire interstate system to make this happen. (i.e. there is enough money in it to make it happen.)

Fully autonomous local driving is not likely to happen till we have real AI that can understand things like: the guy down the street is unloading a construction vehicle, so make a U-turn and go down a different street.


Well, it's always "five years off." Has been for a century now ;)


I'm guessing it would impact airlines minimally. I'm in California and my family is in Illinois. Even the best route, speeding the whole way, it takes me twenty hours. Hell, just to go to San Diego from San Francisco is 7 hours minimum (took me 12 last Friday).

People fly to save time, and if you're going to a hub city (i.e. Chicago, Los Angeles, New York, Atlanta, etc.) it will likely be cheaper to fly as well. I just booked a flight from San Francisco to Chicago for $148 (driving would easily be $250).


If you are going from one non-hub city to another, the overall air travel time goes up substantially (as does the cost). It will still be a good deal faster than driving cross-country, but if driving is relatively stress-free and I can do it through the night, I'm going to be far more likely to skip the stressful lines, running between gates, delays, and high prices of air travel. I don't think I'll be alone.


So, like taking a coach bus? In Europe it is very common to do so.


In areas of the US with high population density, bus service is pretty okay (more or less the Northeast Corridor).

The bus service from Houston to Miami is a little less okay, but that's because it's 1,900 km and won't have a lot of users (Google suggests public transport would take 34 hours, via a train and a couple of buses).


Air travel is priced per person, so vehicular travel becomes more attractive (price-wise) for families or larger groups.


Plus you have a car to use at your end destination (because you've brought it with you).


Well, how about you flyin' there and the car gets there all by itself the next morning so that you can drive around without renting one? Now that would be cool.


I don't know if it makes sense to pay for both driving there and flying there. For some people, maybe.

Would be interesting though - I can imagine a Mad Max-like future where roving bandits set traps for autonomous cars being sent on errands by the wealthy elite.


Not sure what you're paying for in the electric car. Charging at the Tesla stations is free. If it can plug itself in to the charger, it's completely feasible to drive itself across the country.


Miles on the car. Given how the article states that the tires alone are good for 5k miles and cost $1600 to replace, that's 32 cents per mile just for the tires. If you're going somewhere far enough away that it's worth your money to fly instead of drive, it'll also be worth your money to rent a local car rather than have your Tesla drive to meet you.
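For what it's worth, the arithmetic checks out under the figures quoted (which come from the article, not from verified Tesla pricing):

```python
tire_set_cost = 1600   # dollars for a set, as quoted from the article
tire_life = 5000       # miles, as quoted from the article

cost_per_mile = tire_set_cost / tire_life
print(cost_per_mile)   # 0.32 dollars/mile, i.e. 32 cents, as the comment says
```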

I am not a Tesla owner, so please correct me if I got the cost/mileage of the tires wrong.


That's true for the 21" wheels. The tires on the 19" wheels last much longer.


> tires alone are good for 5k miles

Are you sure about this? Tires usually last 30-60k miles.


> I'm guessing it would impact airlines minimally

Maybe by itself, but when you factor in how awful air travel is getting, it might have a bigger impact than it otherwise would. I'm planning a flight from Austin to Chicago later this summer and I'm seriously considering driving because my last two experiences flying have been so awful thanks to the TSA.


7 hours is a good night sleep. Combine self driving cars and coffin hotels and SF to LA would be enjoyable.


Sleepbus is exactly that (with the self-driving car replaced by a bus): http://www.sleepbus.co/

Of course you have to pickup and dropoff at their location, and you don't have a car to use at your destination.


To be fair, not $250 in an electric car.


Probably not if you factor in depreciation on an expensive self-driving electric car. Looks like SF to Chicago is at least 2000 miles.

(Which also makes me wonder how you can do that trip in 20 hours. Speeding indeed :)


I can only assume that as autonomous travel increases and incidents decrease, speed limits would increase. Obviously battery technology still has a ways to go to last more than a couple hundred miles at 100 mph+.


Speed limits depend more on the road (banking in corners, surface conditions, imperfections) and the car (what happens to the vehicle if an accident does happen) than the drivers, so I wouldn't expect major increases in speed limits.


I doubt that, or else you wouldn't see the same stretch and grade of road get two different speed limits depending on which side of the state border you are on.

Additionally, on intrastate highways the speed limits are set by local districts, so districts that like making money will often change the speed limit on any highways that pass nearby so they can run speed traps.


I dunno. In newer cars I have some sense of dislocation, like I'm not really going as fast as the speedometer says.


Clearly, I'm mistaken on the exact times.

Last time I drove it took me four days, but I only drove 8 hours a day and took a long way around.


In Europe we have this already. It's called rail.


Which for most city to city travel will end up being dramatically less efficient and far more expensive than electric, autonomous cars that can go 400+ miles on a charge - unless you're only going from large metro to large metro over a particularly long distance.

For travel within cities, autonomous electric vehicles will ultimately surpass rail travel. That will happen within several decades.

The cost of car use and ownership will plunge. Software and constantly improving batteries will eat the expense out of the heart of the machines.

What's the cost and usefulness of running high-speed rail between two mid-sized Romanian cities that are 60km or 100km apart? Right, it's wildly impractical. Inexpensive, autonomous, electric vehicles, however, are the future and will be extremely practical for every purpose except very long distance travel between two major cities. No other scenario will justify spending the vast sums of money required to install and maintain high-speed rail - and that cost will only continue to climb with wages and security threats in Europe, while electric, autonomous vehicles will bring vehicle costs down dramatically over time.


Wait what? You have it the other way around. Rail will still be dramatically more efficient and cheaper than electric autonomous cars. And inter-city subways will still be more efficient and cheaper than electric autonomous cars. They have higher density, use less space (as opposed to huge roads/highways everywhere), and pollute less.


There will be a combination. Autonomous cars replacing taxis and buses; light rail/MRT for the last 5 miles in dense city centres; heavy rail for rapid medium distance transit (100km < x < 400km); planes for long distance transit.

If anyone thinks that there is the road capacity for a million self-driving cars to drop passengers during every morning rush hour in Mid/Downtown Manhattan or the City of London, they are deluded.


> What's the cost and usefulness to run high speed rail between two mid-sized Romanian cities that are 60km or 100km apart?

What's the cost of building from scratch and maintaining year-on-year a road along the same line, including vehicle and operating costs, before subsidies? That is - what would it actually cost you to drive a car if your taxes weren't paying for it?

Hint - it differs for nearly every situation, and even the idea of high speed rail - 125mph or higher - is not necessarily useful in all situations. A decent commuter line with good bus links to every station is potentially more useful than a high-speed line which zips past dozens of towns.


What about the already congested roads, missing parking places in (european) cities and lastly the expensive maintenance for roads?


Rail cannot pick me up from where I am and cannot deliver me to where I want to be - it can cover the long-distance parts, but I need another means of transportation to get to the railway station and from it.


Seriously, we should find a solution for all those poor stranded railway passengers. Oh, wait, what are those trams, buses, cabs, metros, rental cars, bikes and scooters doing in front of the station? Hmmmm?


This was a good presentation by the Google team: http://www.youtube.com/watch?v=Uj-rK8V-rik

I believe they say in there that they got highway driving down really early on. They have been focusing on urban driving for the last few years.


According to Chris Urmson, lead engineer at Google's driverless car project, less than 5 years.

https://www.ted.com/talks/chris_urmson_how_a_driverless_car_...



Technically not a personal vehicle but it does meet all your other criteria. https://www.greyhound.com


I want a car to drive me to work, then drive out somewhere and find some free parking for the day (or circle the block all day) and then come back at the end of the day to pick me up.


I'd estimate 8-12 years


This is a major reason I'd love to own a Tesla (or a car with similar features). Right now I have to avoid driving long distances at night because I have a hard time staying awake. While I would have no intention of taking a full-on nap while on autopilot, it would make sitting behind the wheel a lot safer during long hauls between cities.


No, that would make it much LESS safe.

Instead of forcing yourself to stay fully awake, you will allow yourself to semi-snooze until you hit a situation the computer can't handle - and then you are in real trouble.


This is exactly why I wish these cars would be banned from being on the road at all in their current state. Some people are assuming it means you don't have to pay as much attention. In the hands of your average consumer, these cars are not safe. As an above average user of technology, I would not trust myself to remain in constant control of such a vehicle. I certainly don't trust the average or below average consumer to handle it!

Most who claim they would remain attentive are not aware that they are making false claims, and this likely includes a lot of HN visitors. The better the cars get at remaining on autopilot, the more complacent drivers will become. A really good autopilot is training its users into trusting it to handle everything. If your average commute results in autopilot practically never kicking in, your brain will naturally - regardless of your intent - find something else to occupy your attention. Reading a book in your lap, checking the weather, grooming yourself in the mirror, or even just spacing out and daydreaming.

Frankly I'm of the opinion that self-driving cars are something that will happen in 50+ years. When cities are built with "roads" that are actually tracks; tracks that are designed to handle only these cars and nothing else. I imagine we will not own such cars; they'll be more like subway/metro public transit systems, but with individual pods that hold a small number of people and with hundreds/thousands of small stations to embark/disembark from.

These vehicles should not be allowed until there is no manual takeover. If the vehicle cannot operate at all times on autopilot, it should not be on the road.


One time I saw a car driving very erratically through an intersection and going into a reserved lane they were not allowed to.

The driver was texting while driving.

I've seen more instances since then.

If you ask me, people can't be trusted to be safe anyway. I'd rather have imperfect autonomous cars than people driving.


Is there any evidence that these cars, abused as they are, are less safe when under autopilot than the average driver on the road?

This is the only reference I've seen to that http://electrek.co/2016/04/24/tesla-autopilot-probability-ac....


>> when under autopilot

I care a lot about what happens when autopilot disengages.

I truly and honestly believe it is a biological, psychological fact, that autopilot will result in less attentive occupants in the vehicle. We're going to blame "drivers" - who are really now just passengers, expected to sometimes take the wheel - for accidents caused by autopilots disengaging, because the driver "should have been in control". Except that most people are going to be mentally unable to maintain the kind of focus needed to be able to take over a failed autopilot.

Let me put it this way: I believe that today, if my mother were to cause an accident, she would have done something tangible to be deserving of being "at fault". In a future where my mother can be considered at fault for killing someone because she failed to take over an autopilot in under a second... that is not a future I want to experience.

I'm afraid of the moment we begin blaming people for loss of life caused by technology expecting us to interfere when it is innately difficult to do so.


I disagree. It would be like having a copilot that stays awake with a driver to make sure the driver is okay. Say that once every ten minutes, you nod off for a half second. The probability of the car's autopilot failing at that same exact time is far less than the probability that half second of lost awareness could cause an accident if there was no autopilot.

No one is advocating actually sleeping at the wheel, I thought my OP was clear about that.
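For what it's worth, the back-of-envelope math supports this - at least with made-up numbers (every rate below is invented purely for illustration):

```python
# All rates below are invented for illustration, not real statistics.
lapse_rate_per_hour = 6              # one half-second nod-off every ten minutes
lapse_duration_s = 0.5
autopilot_failures_per_hour = 0.01   # assumed rate of sudden-handoff events

# Fraction of each driving hour spent in a micro-lapse
lapsed_fraction = lapse_rate_per_hour * lapse_duration_s / 3600

# With autopilot: expected failures that coincide with a lapse, per hour
coincidence_per_hour = autopilot_failures_per_hour * lapsed_fraction

# Without autopilot: every lapse is an unguarded moment at highway speed
unguarded_per_hour = lapse_rate_per_hour

print(coincidence_per_hour)  # tiny, compared with 6 unguarded lapses per hour
```

Even if the assumed failure rate is off by a couple of orders of magnitude, the coincidence probability stays minuscule next to six unguarded half-seconds per hour.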


A Tesla brings itself to a complete stop safely if you don't assume control when it requires you to.


Unless you accidentally kick your foot into the brake pedal while dozing off, which disengages emergency braking, and you plow into an object at full speed while still asleep.


You are assuming it's able to recognize that it needs to do that.

And a complete stop on the freeway is not exactly safe.


Everyone is giving examples of situations where they think an autonomous car will have trouble but many of them boil down to the same thing. Cars will not truly be able to drive as well as humans until they can model humans as other rational agents and interpret subtle environmental cues (like nonstandard signage). Maybe this still isn't strong AI, but it's pretty damn close. So in that regard "fully autonomous" cars are still a ways off, more like decades than years. But I've always thought the industry will still push forward quickly, the core tech is finally getting good enough that there is a strong incentive to just go in and brute force many of the little logic "defects" inherent in the current state of AI. If it can work really really well but only for 70% of weather/driving conditions that still has huge value.


It doesn't need to be perfect, it just needs to be above average. To me, at least.

And given how drivers have gotten WORSE since smart phones hit the scene, I'm ready for these robots right now. I don't feel safe on the roads at all anymore.


4-way stop signs were really confusing for me when I first learned how to drive. It seems there are so many nuances about when you should go or not. Is there even an algorithm available to handle them?


According to a NYT article [1], they can already do that. I'm sure by this point they can model other vehicles' behavior in common situations like this. But ktRolster brings up a good point: there are many situations where a glance is all that is used or needed to signal intent. In the article they mention that at four-way stops the cars handle this by just inching forward and testing the waters, but obviously you don't want to inch forward into a pedestrian or bicyclist. Makes me wonder if we should standardize the method of communicating intentions to pedestrians across all self-driving cars.

[1] http://www.nytimes.com/2015/09/02/technology/personaltech/go...
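For the curious, the textbook right-of-way rules are simple enough to sketch; the hard part is perception and the inch-forward negotiation, not the logic. Here's a toy version (entirely hypothetical, nothing like Google's actual planner):

```python
def who_goes_first(arrivals):
    """Resolve right-of-way at a four-way stop.

    arrivals: list of (car_id, arrival_time_s, approach) tuples, where
    approach is the compass direction the car is coming from.
    Rule 1: first to stop goes first.
    Rule 2: on a (near-)tie, yield to the car on your right.
    """
    # The approach that is "on the right" of a car coming from each
    # direction: a car from the north is heading south, so its right
    # side faces the west approach, and so on.
    right_of = {'N': 'W', 'W': 'S', 'S': 'E', 'E': 'N'}

    earliest = min(t for _, t, _ in arrivals)
    tied = [a for a in arrivals if a[1] - earliest < 0.5]  # ~simultaneous
    if len(tied) == 1:
        return tied[0][0]

    # Tie-break: a car may go if no tied car occupies the approach on its right.
    occupied = {d for _, _, d in tied}
    for car_id, _, d in tied:
        if right_of[d] not in occupied:
            return car_id
    return None  # all four arrived together: fall back to inching forward


print(who_goes_first([('a', 0.0, 'N'), ('b', 1.2, 'E')]))  # 'a': arrived first
print(who_goes_first([('a', 0.0, 'N'), ('b', 0.1, 'W')]))  # 'b': on a's right
```

Note the `None` case: when all four cars arrive together, the rules give no answer, which is exactly where the inching-forward behavior (and the pedestrian problem) comes in.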


Good point, that's a tricky one, especially when you add in pedestrians and bicycles.


Self-driving cars already work well in Google's Mountain View. Mountain View has four way stop signs. So, yes.


With the exception of changing lanes automatically, my Jeep Cherokee does all this stuff.

It does, however, automatically adjust your speed to match the lane you're changing to.

At the end of the day, it's all about sensors and software. I'm sure Tesla has better engineers than Fiat Chrysler but I'm also pretty sure it's off the shelf for them and they just bolt it into the car at this point.

Good stuff, though. Can't wait to see full point A to point B navigation.


"The car had already seen a 60mph (100km/h) speed limit sign, so it set that as its target speed. "

So...what about those signs where kids spray paint 35 to be 85. Will it zoom to 85?


The speed limit for the zone is also in the maps database


Sure, but putting aside pranks and vandals, the question remains what the AI does when there is a small (or large!) discrepancy between 1) physically-signed and 2) mapped/stored speed limits.

I've driven roads where the GPS swore the speed limit was 20 clicks slower than the signed speed limit. It is not necessarily safer to drive significantly slower than the posted speed limit, since most traffic will be doing at least that and relative speed differences are a contributor to surprises/crashes/injuries on the road.


Some combination of the speed of the cars around you, the speed limit gleaned from posted signs, the speed listed for the road in the GPS map, and the average speed of cars when traffic is within a density range[1] would work.

The reason for using a density range is that if the cars are too dense, there is likely traffic, and if they're not dense enough, you'll have either outliers skewing the data or people driving faster than normal due to the lack of other cars.

[1] When you're using Google Maps, Google crowdsources data on how fast traffic is moving, based on all devices using Maps in that vicinity. You can infer traffic density by sampling the number of people in a given area providing data. I assume this data will only get more granular as time goes on, as we add crowdsourced data from autonomous vehicles as well. https://googleblog.blogspot.com/2009/08/bright-side-of-sitti...
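A crude sketch of blending those sources - every weight and threshold here is invented, not anything a real manufacturer ships:

```python
def target_speed(sign_limit, map_limit, nearby_speeds, density_per_km):
    """Pick a target speed from noisy sources (all thresholds made up).

    sign_limit:     limit read off a posted sign, or None if none seen
    map_limit:      limit stored in the map database
    nearby_speeds:  recent speeds of surrounding cars, in mph
    density_per_km: estimated cars per km, gating the crowd signal
    """
    # Prefer the fresher roadside sign, but fall back to the map when the
    # two disagree wildly (misread sign, or the spray-paint prank).
    if sign_limit is None or abs(sign_limit - map_limit) > 15:
        limit = map_limit
    else:
        limit = sign_limit

    # Use crowd speed only at moderate density: too sparse and outliers
    # dominate; too dense and you're measuring congestion, not free flow.
    if nearby_speeds and 5 <= density_per_km <= 40:
        crowd = sorted(nearby_speeds)[len(nearby_speeds) // 2]  # median
        return min(limit, max(crowd, limit - 10))  # follow traffic, capped
    return limit


print(target_speed(85, 35, [], 0))             # 35: sign looks vandalized
print(target_speed(60, 55, [52, 55, 58], 12))  # 55: flow with traffic
```

The interesting design choice is the cross-check: no single source is trusted on its own, so the spray-painted "85" gets overruled by the map, and a stale map value gets overruled by what traffic is actually doing.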


I live in Atlanta. Our major highway, 285, has digital signage. The speed limit varies by time, car density, etc. It can be as low as 25mph on a 5-lane highway.

Sometimes the signs say 00 in no traffic lol (I think during testing..?)

I can imagine a "smart highway" where the signs broadcast the current speed limit...and hopefully broadcast the current speed.


It gets totally confused and does the wrong thing, forcing the driver to take over. I've driven a Tesla on 101, in the SF Bay Area, where there was a lot of construction. It gets confused by the construction signs and the extra lines they paint to reroute traffic around their work.


The car doesn't automatically change the cruise speed set point. There's a shortcut (hold the cruise control stalk back for two seconds) to set it to the current speed limit plus your preconfigured offset, but it requires manual intervention for any change.


If kids spray paint 35 to 85, can a human detect it? If a human can, I am sure a machine can as well (at some point).


The numbers are not arbitrary. There are rules that govern the chosen numbers. Over time we learn that a 2 lane road in a commercial district with street trees is a 35 zone while a 2 lane road in a residential district is a 25 zone.

I agree that at some point cars could learn the rules as well.


The question is whether it does, not whether it's possible.


The human or the machine?


Good article. It captures the wonder and awe of this without going full-tilt fanboy, and it mentions competing technologies to keep the audience aware of where the other players are.


Just curious - at one point, he is going 85 mph in a 75 zone, but the car is on autopilot. He mentioned he had the cruise set at 80. Who is liable for accidents in this situation?


IIRC, the driver is always liable for the vehicle being operated in an unsafe manner; if this is due to the vehicle systems being maintained in a condition that is unsafe, that's still the driver's responsibility.

If the unsafe condition of the vehicle resulted from a manufacturer's defect, this, again IIRC, generally allows those who experience damages to seek compensation from the manufacturer, but doesn't necessarily relieve the driver of liability (legal liability for any particular harm isn't necessarily exclusive in one party.)


For now, and at this level of automation (Level 2), it is the driver's fault. When cars reach higher levels of automation (which generally means deciding more things, like lane changes and target cruise speed), it becomes the car's fault and, by extension, the manufacturer's.


Is that written, or just your opinion (professional or otherwise)?


For one, Volvo has decided to take responsibility for accidents that happen during Autopilot driving http://www.volvocars.com/intl/About/Our-Innovation-Brands/In...


That's what eg Google is lobbying for.


The driver, just as if they had set the cruise to 80 themselves.


After watching this and a Volvo movie I wonder what happens when you touch the steering wheel when you are in autopilot mode.

It's nice that you can check your mail or read a newspaper in autopilot mode as advertised, but doesn't this create the danger of touching the steering wheel and turning autopilot off while still holding a newspaper in your hand?

Also I can imagine people could fall asleep when the car is driving.


What? Are they actually suggesting you can read the newspaper while driving, as long as you have auto-throttle, auto emergency braking and lane keeping on?


https://www.youtube.com/watch?v=xYqtu39d3CU

In this Volvo video the driver opens a notebook to make notes and also watches a movie.

Both Tesla and Volvo have an autopilot mode that still needs an alert driver, but they advertise it like autonomous driving.

Edit: Volvo will indeed launch Level 4 autonomous driving, so they are a step ahead of Tesla. But it's confusing because they also call it autopilot: http://www.volvocars.com/au/about/innovations/intellisafe/au...


Still waiting for the Knight Rider feature where you can talk to your watch to have the car rescue you, I mean pick you up :)


Auto-braking is mandatory in all cars by 2020.



