You're raising an interesting point here. Humans pattern-match everything (and integrate it into a world model). When something out of the ordinary appears, attention goes to it immediately. By comparison, today's AIs just match what they can and leave everything else out. I don't think FSD can happen as long as that's the case.
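To make the gap concrete: a classifier that only reports its best match has no notion of "this is strange". A minimal sketch of one crude, well-known proxy for that missing signal is max-softmax confidence thresholding; the threshold and the toy logits below are arbitrary illustration, not anything a real FSD stack does.

    import numpy as np

    def softmax(logits):
        # Numerically stable softmax over the last axis.
        z = logits - logits.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def flag_out_of_distribution(logits, threshold=0.7):
        # Flag inputs whose best class probability is low, i.e. the
        # model "matched" nothing well. Threshold is an assumption.
        probs = softmax(np.asarray(logits, dtype=float))
        confidence = probs.max(axis=-1)
        return confidence < threshold

    familiar = [8.0, 0.5, 0.2]   # strong match for one class
    weird    = [1.1, 1.0, 0.9]   # nothing matches well: flag it
    print(flag_out_of_distribution([familiar, weird]))  # [False  True]

The catch, and the parent's point, is that low confidence isn't the same as noticing an anomaly: a net can be confidently wrong about an input it has never seen anything like.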


I think a good rule would be: to be fully self-reliant, an AI has to recognize road alterations that are intentionally wrong or notably confusing for what they are, or rather aren't.

On my drive to work yesterday, a driver who had crossed the freshly painted lines to avoid a parked truck had left a fainter, swerving copy of the lines in the middle of the road. Similarly, at construction sites in my country where the road is redirected for a while, the new lines are usually just painted in a different colour while the old ones remain. Sometimes they aren't, and I've been confused at least once.

But what if it were intentional, or more pronounced? Perhaps we really should paint ourselves a Looney Tunes-esque scenario: the kind where our hero paints lane lines leading up to a fake tunnel on a wall. Except instead of a wall, which radar would easily recognise, it should be some other danger.

Also: if I put a cardboard cutout of a cartoon character next to or on a road, will it cause these cars to slow down? Will the car risk a swerve if the cutout appears around a corner?



