There are a lot of bad drivers out there making a lot of bad decisions, but saying "most" of them are essentially not in control of their vehicles is frankly ridiculous.
I have a 26-mile daily drive, all on high-traffic interstate freeways, and I can usually count at least two occurrences per day where I have to take evasive action to avoid a collision – people illegally on their hand-held phones, people blowing across three lanes at 75mph without even bothering to check their mirrors, drivers leaning into the back seat on the freeway, people drifting into the wrong lane in a concurrent two-lane left turn, people who tailgate, leaving mere inches between themselves and the car in front even when that car has nowhere to go, people braking as if their car weighed half of what it actually does, et cetera.
I'm honestly not sure what the solution is, but it's (i) legitimately terrifying every single day, and (ii) hard to believe that any other mode of transportation would accept the kind of outcomes that humans driving on the US interstate highway system produce.
A self-driving car is still being driven, by a computer that has control, situational awareness, and the ability to recognize and avoid dangerous situations. This demonstration was specifically about removing those three factors.
So what happens when (not if, but when) said computer encounters a fatal error? What happens when future security researchers like the ones in this article manage to break into said computers and manipulate them?
If we're going to condemn researchers for creating potential danger, then we might as well extend the same scrutiny to car-driving AI and the makers thereof.
Danger Checklist:
✔ 70mph
✔ Public highway
✔ Driver not in control