
They're basically asking whether the car avoided doing something that would have been more destructive (e.g., hitting one person instead of two), or whether there was no conflict decision-making at all and the car accidentally ran into the person, resulting in a fatality.

- AI decided that one life lost was the better potential outcome

- AI made no decision and just happened to hurt someone


